That's a great insight about iterating on bespoke tools. I've seen the biggest speedup when diving into new tools or building new ones, since AI makes the initial jump quite painless and I can get straight to the problem solving. But I get barely any speedup using it on legacy projects in tools I know well. Often enough it slows me down, so the net benefit is nil or worse.
Another commenter said it makes the easy part easy and the hard part harder, which resonates with me at the moment.
I am pretty excited about being able to jump deep into real problems without code being the biggest bottleneck. I love coding, but I love solving problems more, and coding for fun is very different from coding for outcomes.
That's my observation / fear as well. It makes delivering something that sort of works easy, and doing it well more difficult, by obscuring the problem domain from the humans and expanding the standard library of tools into patterns of using said standard library. Hope they're correct for your use case.
There's also the question of the true cost of all the hardware, electricity, and potential output being tossed onto the pyres. We aren't getting the real Cortana from the books / games; we're getting GIR trained on the corpus of fallible human code, prompted by fallible humans.