Hacker News

lunar_mycroft, yesterday at 8:08 PM

It "obviously" does based on what, exactly? For most devs (and it appears you, based on your comments) the answer is "their own subjective impressions", but that METR study (https://arxiv.org/pdf/2507.09089) should have completely killed any illusions that that is a reliable metric (note: this argument works regardless of how much LLMs have improved since the study period, because it's about how accurate dev's impressions are, not how good the LLMs actually were).


Replies

keeda, yesterday at 9:38 PM

Yes, self-reported productivity is unreliable, but there have been other, larger, more rigorous empirical studies on real-world tasks that we should be talking about instead. The majority of them consistently show a productivity boost. A thread that mentions and briefly discusses some of them:

https://news.ycombinator.com/item?id=45379452

johnsmith1840, yesterday at 9:24 PM

It's a good study. I also believe this is not an easy skill to learn. I wouldn't say I have 10x output, but easily 20%.

Early on I would have said it sped me up 4x, but now, after using it heavily for a long time, some days it's +20% and other days -20%.

It's a very difficult technology to gauge; it's hard to know which of the two you're in on any given day.

The real thing to note is that when you "feel" lazy while using AI, you are almost certainly in the -20% category. I've had days of not thinking where I had to revert all the code from that day because the AI jacked it up so much.

To get that speed-up you need to be truly 100% focused, or you risk death by a thousand cuts.

hu3, yesterday at 8:29 PM

Not OP, but I have a hard metric for you.

AI multiplied the amount of code I committed last month by 5x, and it's exactly the code I would have written manually, because I review every line.

Model: Claude Sonnet 3.5/4.5 in VS Code GitHub Copilot. (GPT Codex and Gemini are good too.)
