Yes, you certainly can argue that, but you'd be wrong. The primary selling point of LLMs is that they solve the problem of needing skill to get things done.
I see it completely the opposite way: you use an LLM and correct all its mistakes, which lets you deliver a rough solution very quickly and then refine it together with the AI, even though it still gets completely lost and stuck on basic things. It’s a very useful companion that you can’t trust, but it’s made me 4-5x more productive and certainly less frustrated by the legacy codebase I work on.
Yeah, I wholeheartedly disagree with this. Because I understand the basics of coding, I can tell where the model gets stuck and prompt it in other directions.
If you don't know what's going on through the whole process, good luck with the end product.
They purportedly solve the problem of needing skill to get things done. IME, this claim is usually repeated by VC-backed LLM companies or by people who haven’t knowingly had to deal with other people’s bad results.
This all bumps up against the fact that most people default to “you’re using the tool wrong” and/or “you should only use it for things where you already have a firm grasp, or at least foundational knowledge.”
It also bumps against the fact that the average person is using LLMs as a replacement for standard Google search.
That is not the entire selling point, so you are very wrong.
You very much decide how you employ LLMs.
Nobody is holding a gun to your head forcing you to use them in a certain way.
So if you use them in a way that increases your inherent risk, then you are incredibly wrong.