Your perspective is quite thoughtful, thank you. I do agree that if you're just fixing a bug or updating function internals, +20/-20 is certainly good enough, and I wouldn't oppose AI being used there.
I'm going to have to agree to disagree overall, though, because the second there's something the AI can't do, the maintenance time for a human to learn the context and solve the problem skyrockets (in practice, for me) in a way I find unacceptable. And I may be wrong, but I don't see LLMs improving enough to close that gap soon, because that would require a fundamental shift away from what LLMs are under the hood.
This was a really interesting conversation, and I learned a lot from your thoughts and everyone else's on this thread.
As I said up top:
> LLMs are the first technology where everyone literally has a different experience.
I totally believe you when you say that you have not found these tools to be net useful. I suspect our differing perceptions come from a whole bunch of things that are hard to convey in a discussion like this, and maybe from factors we're not even aware of. I'm benefiting from a lot of investment my company has made in the harness around all of this.
But I do pretty strongly believe that I'm not hallucinating how well it's all working in my specific context.