>Human programmers don't usually hallucinate things out of thin air
Oh, you wouldn't believe how much they do that too, or are unreliable in similar ways. Bullshitting, thinking they tested X when they didn't, misremembering things, confidently saying that X is the bottleneck and spending weeks refactoring without measuring (only for it to turn out not to be), the list goes on.
>So no, they aren't working the exact same way.
However they work internally, most of the time, current agents (from, say, the last year onward) "describe the issue exactly in the way a human programmer would".