You absolutely need some baseline level of ability if you are going to operate AI coding tools for software that will have paying users... I use these tools very heavily, I'm not against them at all, and I don't scrutinize every single line of code they write. But I very often catch them doing some brain-dead stuff, and if I didn't have a decade-plus of experience I wouldn't know that it was brain-dead.
I think we're rediscovering management from first principles. The main selling point of AI is that it writes code faster than you could. Checking it line by line undoes most of that benefit. In the same vein, there's no real benefit to leading a team if you plan on supervising every task.
But here's the thing: for humans, this is manageable because we've come up with a number of mechanisms to select for dependable workers and to compel them to behave (carrot and stick: bonuses if you do well, prison if you do something evil). For LLMs, we have none of that. If it deletes your production database, what are you going to do? Have it write an apology letter? I've seen people do that.
So I think that your answer - that you'll lean on your expertise - is not sufficient. If there are no meaningful consequences and no predictability, we probably need to have stronger constraints around input, output, and the actions available to agents.
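To make "stronger constraints around the actions available to agents" concrete, here is a minimal sketch of one common approach: a default-deny policy that gates every action an agent requests through an explicit allowlist, with destructive actions routed to a human. All names here are illustrative, not any real agent framework's API.

```python
# Hypothetical sketch: gate agent actions through an explicit policy
# instead of trusting the model to behave. Names are illustrative.
from dataclasses import dataclass, field


@dataclass
class ActionPolicy:
    # Actions the agent may perform without review.
    allowed: set = field(default_factory=lambda: {"read_file", "run_tests"})
    # Actions that always require explicit human approval.
    needs_approval: set = field(default_factory=lambda: {"write_file", "run_shell"})

    def check(self, action: str) -> str:
        if action in self.allowed:
            return "allow"
        if action in self.needs_approval:
            return "ask_human"
        # Default-deny: anything unlisted (e.g. "drop_database") is refused,
        # so there is no apology letter to write afterwards.
        return "deny"


policy = ActionPolicy()
print(policy.check("read_file"))      # allow
print(policy.check("write_file"))     # ask_human
print(policy.check("drop_database"))  # deny
```

The point of the default-deny branch is exactly the consequence problem above: since we can't punish or incentivize the model, the only reliable lever is making the dangerous action impossible to take in the first place.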