> to the point we saturate systems downstream
But that's the point of TFA, no? Now that writing code is no longer the bottleneck, the upstream and downstream processes have become the new bottlenecks, and we need to figure out how to widen them.
As I see it, the end goal for all of this is generating software at the speed of thought, or at least at the speed of speech. I want a digital butler to whom I could just say, "I'm not happy with the way things happened today; please change it so that from here on, it'll be like x." It would respond with "As you wish," and I'd have confidence that it knows me well enough, and is capable enough, to have implemented the best possible interpretation of what I asked for, and that the few miscommunications that do occur would be easy to fix.
We're obviously not close to that yet, but why shouldn't we build towards it?
> Now that writing code is no longer the bottleneck
I think it's contestable that writing the code was ever the main bottleneck.
> As I see it, the end goal for all of this is generating software at the speed of thought, or at least at the speed of speech.
The question is what distinguishes that from having AGI, and if the answer is "nothing", then that will change the whole game entirely, again.