Measuring productivity in software development is a hard problem, one that goes beyond the typical categorizations used in computer science. Unfortunately, my best answer is to go read the book I linked in the conclusion: https://link.springer.com/chapter/10.1007/978-1-4842-4221-6_...
That is an unsatisfying answer. I can point to anecdotes suggesting AI is hurting productivity or improving it, but anecdotes don't make an argument. And the extremes on either side make the question very difficult to reason about. How do you weigh "An LLM deleted my production database" against "I built a business on the back of AI-assisted software"?
I think we have to wait and see. And we should revisit questions of cost and value continuously, not just about LLMs, but generally in life. Most of my motivation (though not an overwhelming majority) around using LLMs right now is a mix of curiosity and wanting to avoid the fate of the steam shovel.
That’s my entire issue with AI: how quickly people are pushing adoption without the evidence to back it up. My buddy works for Block, and he said they fired 70% of their engineers in a bid to force the remaining 30% to use AI in order to keep up.
My very large tech company has made it a goal for each engineer to spend their salary in tokens.
You can make a big bet on AI without risking the entire company. How about we wait for some evidence of measurable productivity increases before betting the farm?