Nope, not anymore. Many have already forgotten how to do that, and it's not a joke.
And putting the vanishing skill aside, there's also the issue of volume.
You could say the same thing about compiled code. Actually, it's worse: anything a compiler spits out is very hard to understand, even for those who know assembly.
So... Our jobs are safe then? I mean, assuming we don't also atrophy to the same extent as the 'many'?
I agree that the problem is volume, even more so than correctness.
All that LLMs and other generative models have done is enable an order of magnitude more stuff to be created cheaply. That shifts the onus and cost onto the consumer of the output, which is why everyone is exhausted after a workday that consists of nothing but reviewing it. At this volume, people will eventually stop looking at all of the output and just trust the randomly generated code, and in time the quality will suffer.