I'm not making a semantic argument, I'm making a practical one.
> prompting the AI with words that you'd actually use when asking a human to perform the task, generally works better
Ok, but why would you assume that would remain true? There's no reason it should.
As AI starts training on code made by AI, you'll get feedback loops: more and more of the training data will be structured alike, while the older handwritten code goes stale.
If you're not writing the code and you don't care about the structure, why would you ever need to learn any of the jargon? You'd just copy and paste prompts from GitHub until something works, or just say "hey Alexa, make me an app like this other app".