Hacker News

Uehreka 01/15/2026

It feels like a lot of people keep falling into the trap of thinking we’ve hit a plateau, and that they can shift from “aggressively explore and learn the thing” mode to “teach people solid facts” mode.

A week ago Scott Hanselman went on the Stack Overflow podcast to talk about AI-assisted coding. I generally respect that guy a lot, so I tuned in and… well, it was kind of jarring. The dude kept saying things in this really confident, didactic (teacherly) tone that were months out of date.

In particular I recall him making the “You’re absolutely right!” joke and asserting that LLMs are generally very sycophantic, and I was like “Ah, I guess he’s still on Claude Code and hasn’t tried Codex with GPT 5”. I haven’t heard an LLM say anything like that since October, and in general I find GPT 5.x to actually be a huge breakthrough in terms of asserting itself when I’m wrong and not flattering my every decision. But that news (which would probably be really valuable to many people listening) wasn’t mentioned on the podcast, I guess because neither of them had tried Codex recently.

And I can’t say I blame them: It’s really tough to keep up with all the changes but also spend enough time in one place to learn anything deeply. But I think a lot of people who are used to “playing the teacher role” may need to eat a slice of humble pie and get used to speaking in uncertain terms until such a time as this all starts to slow down.


Replies

orbital-decay 01/15/2026

> in general I find GPT 5.x to actually be a huge breakthrough in terms of asserting itself when I’m wrong

That's just a different bias purposely baked into GPT-5's engineered personality during post-training. It always tries to contradict the user, including cases where it's confidently wrong, and keeps justifying the wrong result in a funny manner if pressed or argued with (as in, it would never have made that obvious mistake if it weren't bickering with the user). GPT-5.0 in particular was extremely strongly finetuned to do this. And in longer replies or multi-turn convos, it falls into a loop of contradictory behavior far too easily. This is no better than sycophancy. LLMs need an order of magnitude better nuance/calibration/training; that requires human involvement and scales poorly.

Fundamental LLM phenomena (ICL, repetition, serial position biases, consequences of RL-based reasoning, etc.) haven't really changed, and they're worth studying for a layman to get some intuition. However, they vary a lot from model to model due to subtle architectural and training differences, and it's impossible to keep up because there are so many models and so few benchmarks that measure these phenomena.
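
If you want to build intuition for one of these empirically, a probe is easy to sketch. Here's a toy serial-position test (a sketch, not a benchmark; the model name, list size, and trial counts are arbitrary placeholders): bury a key/value pair at different positions in a list of distractors and see how recall varies.

    import random
    from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

    client = OpenAI()

    def recalls(position: int, n_pairs: int = 20, model: str = "gpt-4o-mini") -> bool:
        # Build n_pairs key/value pairs with distinct random values;
        # the pair we'll ask about sits at `position` in the listing.
        vals = random.sample(range(1000, 10000), n_pairs)
        pairs = [(f"key{i}", str(vals[i])) for i in range(n_pairs)]
        target_key, target_val = pairs[position]
        listing = "\n".join(f"{k} = {v}" for k, v in pairs)
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content":
                       f"{listing}\n\nWhat is the value of {target_key}? Reply with the value only."}],
        )
        return target_val in (resp.choices[0].message.content or "")

    # Serial-position effects predict better recall at the ends than in the middle.
    for pos in (0, 10, 19):
        hits = sum(recalls(pos) for _ in range(10))
        print(f"position {pos}: {hits}/10 correct")

A 20-item list will likely saturate on current models; crank n_pairs up until accuracy drops and the position curve becomes visible.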

aeneas_ory 01/16/2026

"Still on Claude Code" is a funny statement, given that the industry is agreeing that Anthropic has the lead in software generation while others (OpenAI) are lagging behind or have significant quality issues (Google) in their tooling (not the models). And Anthropic frontier models are generally "You're absolutely right - I apologize. I need to ..." everytime they fuck something up.

zeroonetwothree 01/16/2026

Why is it that every time anyone has a critique, someone has to say “oh, but you aren’t using model X, which clearly never has this problem and is far better”?

Yet the data doesn’t show all that much difference between SOTA models. So I have a hard time believing it.

raincole 01/16/2026

People desperately want 'the plateau' to be true because it means our jobs would be safe and we could call ourselves experts again. If the ground keeps moving, then no one is truly an expert. There just isn't enough time to achieve expertise when the paradigm shifts every six months.

alternatetwo 01/15/2026

Claude is still just like that once you’re deep enough into the valley of the conversation. Not that exact phrase, but things like “that’s the smoking gun” and so on. Nothing has changed.

PaulDavisThe1st 01/16/2026

> I haven’t heard an LLM say anything like that since October, and in general I find GPT 5.x

It said precisely that to me 3 or 4 days ago when I questioned its labelling of algebraic terms (even though it was actually correct).

overgard 01/16/2026

I don't see a reason to think we're not going to hit a plateau sooner or later (and probably sooner). You can't scale your way out of hallucinations, and you can't keep raising tens of billions to train these things without investors wanting a return. Once you use up the entire internet's worth of Stack Overflow responses and public GitHub repositories, you run into the fact that these things aren't good at doing things outside their training dataset.

Long story short, predicting perpetual growth is also a trap.

Q6T46nT668w6i3m 01/16/2026

On balance, there’s far more evidence to support the conclusion that language models have reached a plateau.

jgalt212 01/16/2026

To me this seems like a classic LLM defense.

A doesn't work. You must use frontier model 4.

A works on 4, but B doesn't work on 4. You're doing it wrong; you must use frontier model 5.

Ok, now I use 5, and A and B work, but C doesn't work. Fool, you must use frontier model 6.

Ok, I'm on 6, but now A is not working as well as it did on 5. Only fools are still trying to do A.

soulofmischief 01/16/2026

Opus 4.5 seems to be better than GPT 5.2 or 5.2 Codex at using tools and working for long stretches on complex tasks.

MoltenMan 01/15/2026

I agree with a lot of what you've said, but I completely disagree that LLMs are no longer sycophantic. GPT-5 is definitely still very sycophantic; 'You're absolutely right!' still happens, etc. It's true it happens far less in a pure coding context (Claude Code / Codex), but I suspect that's only because of the system prompts, and those tools are by far in the minority of LLM usage.

I think it's enlightening to open up ChatGPT on the web with no custom instructions and just send a regular request and see the way it responds.
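
If you want to control for the web UI entirely, the same experiment is easy via the API: send an opinionated question with and without a system prompt and compare the tone. A minimal sketch (the model name and the system prompt text are placeholders I made up, not what any coding tool actually ships):

    from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

    client = OpenAI()
    QUESTION = "I think we should store all our timestamps as local time, right?"

    def ask(system: str | None) -> str:
        # Omitting the system message entirely shows the model's default persona.
        messages = [{"role": "system", "content": system}] if system else []
        messages.append({"role": "user", "content": QUESTION})
        resp = client.chat.completions.create(model="gpt-4o", messages=messages)  # placeholder model
        return resp.choices[0].message.content or ""

    print("--- bare model, no system prompt ---")
    print(ask(None))
    print("--- terse agent-style system prompt (hypothetical) ---")
    print(ask("You are a terse coding agent. Disagree plainly when the user is wrong."))

If the bare run opens with flattery and the prompted run doesn't, that's at least consistent with the "it's the harness, not the model" suspicion.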