As someone in the 99th percentile of token usage, it's very clear to me where the agent cannot replace my judgement. Two areas:
1. If it exceeds the context window, the agent does random things that often work against simplicity and coherent logical structure.
2. LLMs have zero intention, and rely on you to decide what to build and, more importantly, what not to build.
As such, I'm the bottleneck on the number of concurrent agents working for me, because there is still a limit to my output of engineering judgement. I do get better, both at generating and delivering this judgement, but past that limit the output becomes garbage.
As of today, AI does not automate me away in any way; I have something it just flat out doesn't have.
Playing devil's advocate here; I'm not antagonizing you, just thinking out loud.
> If it exceeds the context window, the agent does random things that often work against simplicity and coherent logical structure.
That's a current technical limitation. Are you so sure it won't be overcome in the near or mid term?
> LLMs have zero intention, and rely on you to decide what to build and, more importantly, what not to build.
But work is being done to remove or automate even this layer, right? It may be hyperbole (in fact, it is), but aren't Anthropic et al. predicting exactly this? Why wouldn't your boss, or your boss's boss, do this instead of you? If they lack the judgement currently, are you so sure they cannot gain it once they don't have to waste time learning how to code? If not now, what about soon-ish?
> As of today, AI does not automate me away in any way
Not now, granted. But what about soon? In other words, shouldn't you be worried as well as excited?