Betteridge's law proven correct once again. The answer to the headline is: no. Perhaps it will be true in the future; nobody knows.
I'm skeptical of the extent to which people publishing articles like this use AI to build non-trivial software, and by non-trivial I mean _imperfect_ codebases that have existed for a few years: battle-tested, scarred by hotfixes shipped to put out fires, full of compromises for weird edge cases and workarounds, and, above all, codebases many developers have contributed to over time.
Just this morning I was using Gemini 3 Pro on some trivial feature. I asked it how to go about solving an issue, and it completely hallucinated a solution, suggesting a non-existent function supposedly exposed by a library. This has been the norm in my experience for years now, and while things have improved over time, it's still a very, very common occurrence. If it can't get these use cases down to an acceptable success rate, I just don't see how I can trust it to take the reins and do it all with an agentic approach.
And that's just the usability perspective. If we consider the economics, none of the AI services are profitable; they are all heavily subsidized by investor cash. Is that sustainable long term? Today it seems as if there is an infinite amount of cash, but my bet is that it will give out before the cost of building software drops by 90%.
>I asked it how to go about solving an issue, and it completely hallucinated a solution, suggesting a non-existent function supposedly exposed by a library.
Yeah, that's a huge pain point with LLMs. Personally, I'm way less affected by them because my codebase depends only minimally on library code (by surface area), so if something doesn't exist or whatever, I can just tell the LLM to implement the thing it hallucinated :P
These hallucinations are usually a good sign of "this logically should exist but it doesn't exist yet" as opposed to pure bs.
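To make that concrete, here's a hypothetical sketch: the `text_utils` library and its `slugify_unicode` function are invented for illustration, not real APIs. The point is that when the model hallucinates a helper like this, you can often just ask it to write the helper itself instead of hunting for the library call that never existed.

```python
# Hypothetical example: suppose the model suggested calling
# text_utils.slugify_unicode(title), a function that doesn't exist
# in the (imaginary) text_utils library. Instead of fighting the
# hallucination, implement the missing helper in your own codebase.
import re
import unicodedata

def slugify_unicode(title: str) -> str:
    """Turn an arbitrary Unicode title into a URL-safe slug."""
    # Decompose accented characters and drop the non-ASCII parts.
    normalized = unicodedata.normalize("NFKD", title)
    ascii_only = normalized.encode("ascii", "ignore").decode("ascii")
    # Lowercase, then collapse runs of non-alphanumerics into hyphens.
    return re.sub(r"[^a-z0-9]+", "-", ascii_only.lower()).strip("-")

print(slugify_unicode("Café déjà vu!"))  # -> "cafe-deja-vu"
```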