As one of the naysayers who commented extensively on the original study, I enthusiastically endorse any attempt to actually measure AI productivity. An increase from a 20% slowdown to a 20% speedup over the past year seems broadly consistent with my understanding of how things have gone. I think I remain classified as a "naysayer", though, because over the same period the "booster" case has gone from "I'm multiple times more productive" to "I never have to look at code; my AI agents just handle everything".
I think the issue was incomplete context. Even before the original METR study came out, a number of larger-scale studies showed a 15-30% boost, going as far back as 2024. I often mention them, though they require some explanation, so this thread and its linked comments may be useful: https://news.ycombinator.com/item?id=46559254
However, those studies never got as much airtime as the METR study, which has created an imbalanced perspective.
My take is that studies like this are extremely useful but a lagging indicator of the true extent of AI-assisted coding, especially since the latest tools are something else entirely.
I am not at the "never look at code again" stage; the old habits are just too ingrained. But I'm starting to look less frequently, because I rarely find anything to fix. I can see a path from where I am now to the outlandish claims people have been making.