FWIW, I did a full modernization and redesign of a site (~50k LOC) over a week with Claude. I was able to ensure quality by writing a strong e2e test suite ahead of time (which I also drove with AI), then making sure Claude ran the suite every time it made changes. I got a bunch of really negative comments about it on HN (alluded to in my previous comment - everything from being told the site looked embarrassing and didn't deserve to be on HN, to being told the 600ms load time was too slow, etc, etc), so I mostly withdrew from posting more about it. Still, I think the strategy of a robust e2e suite is a really good idea that can really drive AI productivity.
Yes, that e2e suite is a must for long-term maintenance, and it would probably be a good idea to create something like it up front, before you even start work on the actual application.
I think it pays off to revisit the history of the compiler. Initially, compilers were positioned as a way for managers to sidestep the programmers, because the programmers had too much power and were hard to manage.
Writing assembly language by hand is tedious and requires a certain mindset, and the people who did it (at a time when programming was still seen as an 'inferior' kind of job) were doing the best they could with very limited tools.
Enter the compiler: now everything would change. Until the mid-1980s many programmers could, given enough time, take the output of a compiler, scan it for low-hanging fruit and produce hybrids where the 'inner loops' were hand optimized until they made optimal use of the machine. This gave you 98% of the performance of a completely hand-crafted solution, isolated the 'nasty bits' to a small section of the code and was much more manageable over the longer term.
Then, ca. 1995 or so, the gap between the best compilers and the best humans started to widen, and the only areas where the humans still held the edge were the most intricate close-to-the-metal software - computer games, for instance - and some extremely performance-sensitive math code (FFTs, for instance).
A multitude of different hardware architectures, processor variations and other dimensions made consistently maintaining an edge harder and today all but a handful of people program in high level languages, even on embedded platforms where space and cycles are still at a premium.
Enter LLMs
The whole thing seems to repeat: there are some programmers who are - quite possibly rightly so - holding on to the past. I'm probably guilty of that myself to some extent; I like programming, and the idea that some two-bit chunk of silicon is going to show me how it is done offends me. At the same time I'm aware of the past, have already gone through the assembly-to-high-level transition, and I see this as just more of the same.
Another, similar effect was seen around the introduction of the GUI.
Initially the 'low-hanging fruit' of programming will fall to any new technology we introduce: boilerplate, CRUD and so on. Over time I would expect these tools to improve to the point where all aspects of computer programming are touched by them, and where they either meet or exceed the output of the best of the humans. I believe we are not there yet, but the pace is very high, and it could easily be that within a few short years we will be in an entirely different relationship with computers than we have had up to today.
Finally, I think we really need to see some kind of frank discussion about compensation for the code ingested by the model providers. There is something very basic that is wrong about taking the work of hundreds of thousands of programmers and then running it through a copyright laundromat at anything other than a 'cost+' model. The valuations of these companies are ridiculous, and they are a direct reflection of how much code was taken from others.