> Within a few years it won't make sense for people to learn how to write actual code
Why?
Because LLMs are sometimes capable of producing working snippets of usually completely unmaintainable code?
At this point I don't think there's any point arguing with this belief. If you haven't found a way to make the models useful, you will have a lot of difficulty staying relevant.
I wouldn't hire anyone who doesn't use LLMs, and I specifically screen for people who are good at using them.
You can still argue that LLMs won't replace human programmers without downplaying their capabilities. Modern SOTA LLMs can often produce genuinely impressive code. Full stop. I don't personally believe that LLMs are good enough to replace human developers, but the claim that they are only capable of writing bad code is ridiculous and easily falsified.