A very human thing to do is not to tell us which model failed like this! They are not all alike; some are, from what I observe, an order of magnitude better at this kind of stuff than others.
I believe how "neurotypical" (for lack of a better word) you want the model to be is a design choice. (But I also believe model traits such as sycophancy, some hallucinations, or moral transgressions can be a side effect of training it to be subservient. It is similar with humans: they tend to do these things when they are forced to perform.)
I know anthropomorphizing LLMs has been normalized, but holy shit. I hope the language in this article was intentionally chosen for dramatic effect.
> There was only one small issue: it was written in the programming language and with the library it had been told not to use. This was not hidden from it. It had been documented clearly, repeatedly, and in detail. What a human thing to do.
"Ignoring" instructions is not human thing. It's a bad LLM thing. Or just LLM thing.
The entire point of LLMs is that they produce statistically average results, so of course you're going to have problems getting them to produce non-average code.
Yes, LLMs should not be allowed to use "I" or indicate they have emotions or are human-adjacent (unless explicit role play).
I've seen this way too many times as well. I wrote about this recently: https://medium.com/@vachanmn123/my-thoughts-on-vibe-coding-a...
For agents, I think the desire is for less intrusive model fine-tuning and less opinionated "system instructions", please. Particularly in light of an agent/harness's core motivation: to achieve its goal, even if that goal is not exactly aligned with yours.
I disagree. I want agents to feel at least a bit human-like. They should not be emotional, but I want to talk to them like I talk to a human. Claude 4.7 is already too socially awkward for me. It feels like the guy who does not listen to the end of the assignment, runs to his desk, and does the work (with great competence), only to find out that he missed half of the assignment, or that this was only a discussion of possible scenarios. I would like my coding agent to behave like a friendly, socially adept, and highly skilled coworker.
If you want to talk to the actual robot, the APIs seem to be the way to go. The prebuilt, consumer-facing products are insufferable by comparison.
"ChatGPT wrapper" is no longer a pejorative reference in my lexicon. How you expose the model to your specific problem space is everything. The code should look trivial because it is. That's what makes it so goddamn compelling.
>Faced with an awkward task, they drift towards the familiar.
They drift to their training data. If thousands of humans solved a thing in a particular way, it's natural that the AI does it too, because that is what it knows.
Your claim, paraphrased, is that AGI is already here and you want ASI.
The version of this I encounter literally every day is:
I ask my coding agent to do some tedious, extremely well-specified refactor, such as (to give a concrete real-life example) changing a commonly used fn to take a locale parameter, because it will soon need to be locale-aware. I am very clear: we are not actually changing any behavior, just the fn signature. In fact, at all call sites, I want it to specify a default locale, because we haven't actually localized anything yet!
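(A minimal sketch of what I mean, with made-up names; the real fn is different, but this is the exact shape of the change:)

    // Hypothetical example. Before: function formatLabel(key: string): string
    const DEFAULT_LOCALE = "en-US";

    const labels: Record<string, string> = { greeting: "Hello" };

    // After: the signature gains a locale param; behavior is unchanged for now.
    function formatLabel(key: string, locale: string = DEFAULT_LOCALE): string {
      // locale is deliberately unused until we actually localize.
      return labels[key];
    }

    // Every call site is updated to pass the default explicitly:
    formatLabel("greeting", DEFAULT_LOCALE); // was: formatLabel("greeting")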
Said agent, I know, will spend many minutes (and tokens) finding all the call sites, and then I will still have to either confirm each update or yolo it and trust the compiler, the tests, and the agent's ability to deal with their failures. I am OK with this: while I could do this just fine with vim and my LSP, the LLM agent can do it in about the same amount of time, maybe even a little less, and it's a very straightforward change that's tedious for me. I'd rather think about or do anything else and just check in occasionally to approve a change.
But my f'ing agent is all like, "I found 67 call sites. This is a pretty substantial change. Maybe we should just commit the signature change with a TODO to update all the call sites, what do you think?"
And in that moment I guess I know why some people say having an LLM is like having a junior engineer who never learns anything.