Look, I have no idea if this is related, but talking to other developers recently I have noticed that the addiction to / allure of the speed you get coding with AI agents is leading them to relax their usual quality bar. This doesn't even feel like the evil overlords whipping them harder; it is self-inflicted.
When you can get multiple different agents all working on things and you are bouncing between them, careful review of their code becomes the bottleneck. So you start lowering your bar to "good enough", where "good enough" is not really good enough. It's a new good enough: you squint at the code, and as long as the shape is vaguely ok and the code works (meaning you click around a bit and it seems fine), it passes.
Over time you lose your "theory"[1] of the software, and I would imagine that makes you effectively lower your bar even further, because you are less attached to what good should look like.
This is all anecdotal on my end, but it does feel like quality as a whole in the industry has tanked in the last maybe 12 months? It feels like there are more outages than normal. I couldn't find a good temporal outage graph, but if you trust this: https://www.catchpoint.com/internet-outages-timeline , the number of outages in 2025 is orders of magnitude up on 2024.
Maybe that's because there genuinely are way more outages, maybe it's because way more of them are now being tracked; I'm not sure. But it definitely _feels_ like we are in for a bumpy ride over the next few years.
[1] in the Programming as Theory Building sense: https://gareth.nz/ai-programming-as-theory-building.html
Exactly right, and by the time you've rebuilt that theory, could you have just written it all yourself?
Exactly, half of a system exists as code, but the other half exists as a mental model in the minds of the devs. With AI, the former will now outrun or deviate from the latter much more quickly, and then the problems of long-term reliability, maintainability, and confidence in validation and delivery are just beginning.