LLMs are just another tool, but they're disruptive enough that existing best practices need to be either updated or re-explained.
A lot of people using LLMs don't seem to have grasped that you can't expect a model to produce working code unless you test it first!
If that weren't clearly a problem, I wouldn't have felt the need to write this.
Yep, it's a real problem. No dispute there.
My intention isn't to argue a point, just to share how it came across when I read it.
I read your response here as saying something like "I noticed that people misunderstand X, so I wanted to inform them." In this case, "X" isn't itself very obvious to me (for any given task, why can't you expect a cutting-edge LLM to write it without you testing it?), but more importantly, I don't think I would approach a pure misunderstanding (tantamount to a skills gap) with your particular framing. Again, to me it reads as patronizing.
Love the pelican on the bicycle, though. I think that's been a great addition to the zeitgeist.