> Isn’t “instruction following” the most important thing you’d want out of a model in general,
No. And for the same reason that pure "instruction following" in humans is considered a form of protest/sabotage.
It's still insanity to me that doing your job exactly as defined and not giving away extra work is considered a form of action.
Everyone should be working-to-rule all the time.
I don’t understand the point you’re trying to make. LLMs are not humans.
From my perspective, the whole problem with LLMs (at least for writing code) is that they assume too much. A model should follow the instructions faithfully and ask the user for clarification whenever the request is ambiguous.
I find it extremely annoying when the model pushes back or disagrees instead of asking for clarification. For this reason, I’m not a big fan of Sonnet 4.5.
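FWIW, you can nudge a model in that direction with a system prompt. Here's a minimal sketch using the Anthropic Python SDK, since Sonnet was mentioned; the model ID, the prompt wording, and the sample request are all illustrative, and in my experience this reduces the assuming/pushing-back behavior rather than eliminating it:

```python
# Sketch: steer the model toward "ask, don't assume" via the system prompt.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM = (
    "You are a coding assistant. Follow the user's instructions exactly. "
    "Do not make assumptions about unstated requirements. If any part of "
    "the request is ambiguous, ask one clarifying question before writing "
    "code. Do not push back on the requested approach unless it cannot "
    "work at all."
)

message = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model ID
    max_tokens=1024,
    system=SYSTEM,
    messages=[
        # Deliberately underspecified request, to exercise the clarifying-question path.
        {"role": "user", "content": "Add retry logic to the fetch function."}
    ],
)
print(message.content[0].text)
```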