Yeah, I can confirm, before LLMs I definitely thought coding would be the last thing to go.
Beyond training data availability, it's always easiest to automate what you understand. Since building AI/LLMs is itself software engineering, that's the domain their creators understand best, which is why it has been automated to the extent that it has. Everything else involves more domain knowledge.
I'm still of the opinion that coding will be the last thing to go. LLMs are an enabler, sure, but until they integrate some form of neuroplasticity they're stuck working on Memento-guy-sized chunks of code. They need a human programmer to provide long context.
Maybe some new technique will change that, but it's not guaranteed. At this point I think we can safely surmise that scaling isn't the answer.
I'm not sure what happens when you replace coders with 'prompt generalists' and the output has non-trivial bugs. What do you do then, when the product is crashing and the business is losing money, or there's a security hole? You can't just tell LLMs "oh wait, what you made is bad, make it better." At a certain point, that's the best it can make. And if you don't understand the security or engineering issue behind the bug, then even if the LLM could fix it, you don't have the skills to prompt it correctly to do so.
I see tech as 'the king's guard' of capitalism. They'll be the last to go because, at the end of the day, they need to be able to serve the king. 'Prompt generalists' are like replacing the king's guard with a bunch of pampered royals who 'once visited a battlefield.' It's just not going to work when someone comes at the king.
> before LLMs I definitely thought coding would be the last thing to go.
While the quality of the code LLMs produce still varies with prompt quality and available training data, many human software developers are surprised that LLMs (software) can generate quality software at all.
I wonder to what extent this surprise comes from the fact that people tend to think very deeply when writing software and so assume that thinking and "reasoning" are what produce quality software. What if the experiences of "thinking" and "reasoning" are epiphenomena of the physical statistical models encoded in the connections of our brains?
This is an ancient and unsolved philosophical problem (the mind-body problem): whether consciousness and free will affect the physical world. If we live in a materialist universe where matter and the laws of physics are unaffected by consciousness, then "thinking", "reasoning", and "free will" are purely subjective. In such a view, subjective experience attends material changes in the world but does not affect the material world.
Software developers surprised that software (LLMs) can write software might be less surprised if they understood consciousness as an epiphenomenon of materiality. Just as words themselves do not cause the diaphragm to compress the lungs, move air past the vocal cords, and propagate vibrations through the air, perhaps the thoughts that attend an action (including the production of words) are not the motive force of that action.