I'm not sure what happens when you replace coders with 'prompt generalists' and the output has non-trivial bugs. What do you do then? The product is crashing and the business is losing money? Or a security bug? You can't just tell LLMs "oh wait, what you made is bad, make it better." At a certain point, that's the best it can make. And if you don't understand the security or engineering issue behind the bug, even if the LLM could fix it, you don't have the skills to prompt it correctly to do so.
I see tech as 'the king's guard' of capitalism. They'll be the last to go because at the end of the day, they need to be able to serve the king. 'Prompt generalists' are like replacing the king's guard with a bunch of pampered royals who 'once visited a battlefield.' It's just not going to work when someone comes at the king.
> You can't just tell LLMs "oh wait, what you made is bad, make it better." At a certain point, that's the best it can make. And if you don't understand the security or engineering issue behind the bug, even if the LLM could fix it, you don't have the skills to prompt it correctly to do so.
In that case, the idea is that you'd see most programmers in the company replaced by a much smaller group of prompt generalists who work for peanuts, while the company keeps on a handful of people who actually know how to program and do nothing all day but debug AI-written code.
When things crash or a security issue comes up, they bring in the team of programmers, but since they only need a small number of them to get the AI code working again, most programmers would be out of a job. Large numbers of people who actually like touching code for a living will compete for the very small number of jobs available, driving down wages.
In the long term, this would be bad because a lot of talented coders won't be satisfied doing QA on AI slop and will move on to other passions. Everything AI knows, it learned from people who had the skill to do great things, but once all the programmers are just debugging garbage AI code, there will be fewer programmers doing clever things and posting their code for AI to scrape and regurgitate. Tech will stagnate, since AI can't come up with anything new and will only have its own slop to learn from.
Personally, I doubt it'll happen that way. I'm skeptical that LLMs will become good enough to be a real threat. Eventually the AI bubble will burst as companies realize that chatbots aren't ever going to be AGI and will never get good enough to replace most of their employees, and once they see that they're still going to be stuck paying the peasant class, things will slowly get back to normal.
In the meantime, expect random layoffs and rehires (at lower wages) as companies try and fail to replace their pesky human workers with AI, and expect AI to be increasingly shoehorned into places it has no business being, screwing things up and making your life harder in new and frustrating ways.