Isn't that what the author means?
"it still requires genuine expertise to spot the hallucinations"
"works very well if you do know what you are doing"
But it can work well even if you don't know what you are doing (or don't look at the impl).
For example, building a TUI or GUI with Claude Code while only giving it feedback on the UX/QA side. I've done it many times despite 20 years of software experience; some stuff just doesn't justify me spending my time getting credentialed in the impl.
Hallucinations that lead to code that doesn't work just get fixed. Most code I write isn't like "now write an accurate technical essay about hamsters", where hallucinations can sneak through unless I scrutinize it; rather, the code simply fails to work and triggers the LLM's feedback loop to fix it when it runs/lints/compiles/typechecks it (rough sketch of that loop below).
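To make that concrete, here's a minimal sketch of the kind of check-and-fix loop I mean. The specific tools (ruff/mypy/pytest) and the `ask_llm_to_fix` helper are assumptions standing in for whatever agent and toolchain you actually use, not how Claude Code itself works:

```python
# Rough sketch only: ask_llm_to_fix is a hypothetical stand-in for handing
# error output back to the agent (e.g. pasting it into Claude Code).
import subprocess

CHECKS = [
    ["ruff", "check", "."],  # lint
    ["mypy", "."],           # typecheck
    ["pytest", "-q"],        # run tests
]

def first_failure() -> str | None:
    """Run each check in order; return the output of the first one that fails."""
    for cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return result.stdout + result.stderr
    return None

def ask_llm_to_fix(error_output: str) -> None:
    # Placeholder: wherever the failure text gets fed back to the model.
    raise NotImplementedError

def fix_until_green(max_rounds: int = 5) -> bool:
    for _ in range(max_rounds):
        failure = first_failure()
        if failure is None:
            return True          # checks pass; non-working hallucinations don't survive this
        ask_llm_to_fix(failure)  # hand the failure back so the model can fix it
    return False
```

The point isn't the exact commands; it's that broken output hits a hard verifier (compiler, linter, type checker, tests) instead of relying on me reading every line.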
But the idea that you can only build with LLMs if you have a software engineer copilot isn't true, and it inches further from true every month. It kinda sounds like a convenient lie we tell ourselves as engineers (understandably so: it's scary).
The author's headline starts with "LLMs are a failure"; it's hard to take the author seriously with such hyperbole, even if the second part of the headline ("A new AI winter is coming") might turn out to be right.