Exactly. AI is minimally useful for coding something you couldn't have coded yourself, given enough time and without explicitly investing time in generic learning not specific to that codebase or task.
Although calling AI "just autocomplete" is almost a slur now, it really is just that, in the sense that you need to A) have a decent mental picture of what you want, and B) recognize a correct output when you see it.
On a tangent, the inability to identify correct output is also why I don't recommend using LLMs to teach you anything serious. When we use a search engine to learn something, we know when we've stumbled upon a really good piece of pedagogy through various signals: information density, logical consistency, structure and clarity of thought, consensus, reviews, the author's credentials, etc. But with LLMs we lose these critical-analysis signals.
Absolutely spot on.
You are calling out a subtle nuance that many don’t get…
You could have another LLM tell you which is the correct output.
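For what it's worth, a rough sketch of that "second LLM as judge" idea, using the OpenAI Python client; the model names and the judging prompt are just placeholders I picked for illustration, and the judge is of course still an LLM itself.

```python
# Sketch: ask one model for an answer, then ask a second model to judge it.
# Model names and prompts below are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def generate_answer(question: str) -> str:
    """Ask one model for an answer to the question."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder "worker" model
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content


def judge_answer(question: str, answer: str) -> str:
    """Ask a second model whether the first model's answer looks correct."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder "judge" model
        messages=[{
            "role": "user",
            "content": (
                f"Question: {question}\n\n"
                f"Proposed answer: {answer}\n\n"
                "Is this answer correct? Reply 'yes' or 'no' with a one-line reason."
            ),
        }],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    q = "What does Python's list.sort() return?"
    a = generate_answer(q)
    print("Answer:", a)
    print("Judge:", judge_answer(q, a))
```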
I've been trying to articulate this exact point. The problem w/ LLMs is that at times they are very capable but always unreliable.