I'd bet it's the LLM doom loop: vaguely ask it to do something, tab to news.ycombinator.com for 30 minutes, tab back, notice it misunderstood the prompt. Restart with a new, improved prompt, tab back to HN.
So yeah, probably the same thing people do anyway, except now it's not compile time, it's generating time.
We opened the Claude Code floodgates all at once in my org. After a few months we looked at the stats and asked managers for their impressions of performance changes. API cost per engineer doesn't correlate with the apparent performance increases, but it sure seems that the vast majority of people who used to get good reviews got a lot better, while the bottom third just didn't, even though they use the LLMs about as much. It makes the performance differences within teams look like an abyss. Someone appears stuck on a task, we look at what they've been prompting, and then one of the best seniors comes in, actually asks the questions well, and the LLM does all the debugging and all the fixing in 20 minutes.
It's not that the best performers are magical prompt engineers providing detailed instructions: they ask better questions, ones the LLM knows how to try to answer, and they provide the specific information that the LLM would otherwise take a while to find. It's as if some people just have no "theory of mind" of the LLM and what it can know, while others do. It's not a living thing or anything like that, but it's still useful to predict it, to put yourself in its shoes, so to speak. Just like you'd do with a new hire, or a random junior.