> On hacker news, a very tech literate place, I see people thinking modern AI models can’t generate working code.
I am completely flooded with comments and stories about how great LLMs are at coding, so I'm curious how you arrived at such a different picture. Can you point me to a thread or story that supports your view? From where I sit, people who think AI cannot generate working code seem almost nonexistent.
It's a real thing, but it's usually tied to IT folks who tried ChatGPT ~2 years ago (in a web browser) and had to "fix" whatever it output. That experience solidified their "understanding of AI," and they haven't updated their knowledge since (because... no pressing need).
Folks like this have never used AI inside an IDE or via one of the CLI AI tools. Without that perspective, AI seems like little more than a gimmick.