I don't understand this thinking.
How many hours per week did you spend coding on your most recent project? If you could do something else during that time, and the code still got written, what would you do?
Or are you saying that you believe you can't get that code written without spending an equivalent amount of time describing your judgments?
In my experience (and especially at my current job) bottlenecks are more often organizational than technical. I spend a lot of time waiting for others to make decisions before I can actually proceed with any work.
My judgement is built into the time it takes me to code. I think I would spend the same amount of time exercising it while reviewing the AI's code to make sure it isn't doing something silly (even if it does technically work).
A friend of mine recently switched jobs from Amazon to a small AI startup where he uses AI heavily to write code. He says it's improved his productivity 5x, but I don't really think that's the AI. I think it's (mostly) the lack of bureaucracy in his small two- or three-person company.
I'm very dubious about claims that AI can improve productivity so much because that just hasn't been my experience. Maybe I'm just bad at using it.
All you did was change the programming language from (say) Python to English. One is designed to be a programming language, with few ambiguities etc. The other is, well, English.
The speed of typing code is not all that different from the speed of typing English, even accounting for the volume expansion of English -> <favorite programming language>. And then there is the new extra cost of reading and understanding whatever code the AI wrote.
> Or are you saying that you believe you can't get that code written without spending an equivalent amount of time describing your judgments?
It’s sort of the opposite: You don’t get to the proper judgement without playing through the possibilities in your mind, part of which is accomplished by spending time coding.
I think OP is closer to the latter. How I've typically been using Copilot is as a faster autocomplete that I read and tweak before moving on. Too many years of struggling to describe a task to Siri left me deciding "I'll just show it what I want" rather than tell it.
"Writing code" is not the goal. The goal is to design a coherent logical system that achieves some goal. So the practice of programming is in thinking hard about what goal I want to achieve, then thinking about the sort of logical system I could design that would allow me to verifiably achieve that goal, then actually banging out the code that implements the abstract logical system in my head, then iterating to refine both the abstract system and its implementation. And as a result of being the one who produced the code, I have certainty that the code implements the system I have in mind, and that the system it represents is fit for the purpose of achieving the original goals.
So reducing the part where I go from abstract system to concrete implementation only saves me time spent typing, while at the same time decoupling me from understanding whether the code actually implements the system I have in mind. To recover that coupling, I need to read the code and understand what it does, which is often slower than just typing it myself.
And to even express the system to the code generator in the first place still requires me to mentally bridge the gap between the goal and the system that will achieve that goal, so it doesn't save me any time there.
The exceptions are things where I literally don't care whether the outputs are actually correct, or they're things that I can rely on external tools to verify (e.g. generating conformance tests), or they're tiny boilerplate autocomplete snippets that aren't trying to do anything subtle or interesting.
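To make that "external tools to verify" case concrete, here's a hypothetical sketch (the function names and the config format are mine, invented for illustration): if an AI generates an encoder, a round-trip check against a small hand-written decoder lets me trust the output without auditing the generated code line by line.

```python
import json

# Stand-in for an AI-generated function (hypothetical): encode a dict of
# settings as sorted, newline-delimited "key=json(value)" lines.
def encode_settings(settings: dict) -> str:
    return "\n".join(f"{k}={json.dumps(settings[k])}" for k in sorted(settings))

# Small hand-written decoder that serves as the external check.
def decode_settings(text: str) -> dict:
    result = {}
    for line in text.splitlines():
        key, _, raw = line.partition("=")
        result[key] = json.loads(raw)
    return result

# Round-trip property: I don't need to read the generated encoder closely,
# I only need this to hold on representative inputs (keys without "=").
sample = {"retries": 3, "host": "localhost", "verbose": True}
assert decode_settings(encode_settings(sample)) == sample
```

The verification cost here is writing the property once, not re-deriving the encoder in my head, which is exactly why these cases are the exception rather than the rule.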