> A study from METR found that when developers used AI tools, they estimated that they were working 20% faster, yet in reality they worked 19% slower. That is nearly a 40% difference between perceived and actual times!
It’s not nearly 40%. It’s either 33% slower than perceived or perception overestimating speed by roughly 50%. I don’t know how to trust the author if stuff like this is wrong.
I personally get caught up in this math as well. Is a charitable interpretation of the throwaway line that they were off by that many “percentage points” (20 + 19 = 39)?
Can you elaborate? If they're incorrect, it seems like a simple mistake; I'm not sure where 33% or 50% come from here.
Isn't the study a year old by now? Things have evolved very quickly in the last few months.
> I don’t know how to trust the author if stuff like this is wrong.
She's not wrong.
A good way to do this calculation is with the log-ratio, a centered measure of proportional difference. It's symmetric and widely used in economics and statistics for exactly this reason, i.e.:
ln(1.2/0.81) = ln(1.2)-ln(0.81) ≈ 0.393
That's nearly 40%, as the post says.
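If it helps to see all three numbers side by side, here's a quick Python sketch. It assumes the speed factors used in this thread (perceived 1.20, actual 0.81); the variable names are just for illustration.

```python
import math

perceived = 1.20  # developers' estimate: 20% faster
actual = 0.81     # measured result: 19% slower

# Naive ratios are asymmetric: the answer depends on which value you treat as the baseline.
slower_than_perceived = 1 - actual / perceived   # ≈ 0.325 → "about 33% slower than perceived"
overestimate_of_speed = perceived / actual - 1   # ≈ 0.481 → "roughly 50% overestimate"

# The log-ratio is symmetric: the same magnitude in either direction.
log_ratio = math.log(perceived / actual)         # ≈ 0.393 → "nearly 40%"

print(f"{slower_than_perceived:.3f} {overestimate_of_speed:.3f} {log_ratio:.3f}")
# 0.325 0.481 0.393
```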