> My belief based on personal experience is that in software engineering it wasn't until November/December 2025 that AI had enough impact to measurably accelerate delivery throughout the whole software development lifecycle.
Gemini 3 and Opus 4.6 were the "woah, they're actually useful now!" moment for me.
I keep saying to colleagues that it's like a rising tide. Initially the AIs were lapping around our ankles, now the level of capability is at waist height.
Many people have commented that 50% of developers think AI-generated code is "Great!" and 50% think it's trash. That's a sign that AI code quality is roughly at the level of the median developer. This split will likely shift to 60%-40%, then 70%-30%, and so on.
I don't see definitive evidence of some kind of Moore's law for model improvement, though. Just because this year's model performs better than last year's doesn't mean next year's will be another leap. Most of the big improvements this year seem to come from tooling rather than the models themselves - I still see Opus 4.6 (my daily driver at work) making plenty of mistakes.