> 1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.
They have more work to do until they don't.
The number of bank tellers went up for a while after the invention of the ATM, but then it went down, because all the demand was saturated.
We still need food, and farming hasn't stopped being a thing; nevertheless, we went from 80-95% of us working in agriculture and fishing to about 1-5%, and even with just those percentages working in that sector we have more people over-eating than under-eating.
As this transition happened, people were left unemployed; they moved to cities to find work, and there were real social problems as a result. It happened at the same time that cottage industries were being automated: hand looms becoming power looms, weaving becoming programmable with punch cards. This is why communism was invented when it was invented, and why it became popular when it did.
And now we have fast fashion, with clothes so fragile that they might not last one wash, and yet we still spend a lower percentage of our incomes on clothes than people in the pre-industrial age did. Even when demand is boosted by clothes that don't last, we still make enough to meet it.
Lumberjacks still exist despite chainsaws, and are so efficient with them that the problem is we may run out of rainforests.
Are there any switchboard operators around any more, in the original sense? If I read this right, the BLS groups them together with "Answering Service", and I'm not sure how that combined group differs from a customer support line: https://www.bls.gov/oes/2023/may/oes432011.htm
> 2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.
This would be absolutely correct (I've made the analogy to Amdahl's law myself previously) if LLMs didn't also help with so many of the other steps. I mean, the linked blog post is about answering new-starter questions, which is also not the only thing people get paid to do.
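The Amdahl's law analogy can be made concrete in a few lines. The 20% figure below for the share of the job spent writing code is purely illustrative, not a measured number:

```python
# Amdahl's law: if only a fraction p of the job is sped up by a
# factor s, the overall speedup is capped at 1 / ((1 - p) + p / s).

def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when fraction p of the work gets s times faster."""
    return 1.0 / ((1.0 - p) + p / s)

# Suppose (as an assumption) coding is 20% of the job:
print(amdahl_speedup(0.2, 10))   # coding 10x faster -> ~1.22x overall
print(amdahl_speedup(0.2, 1e9))  # coding effectively instant -> ~1.25x
```

Even with code generation effectively instant, the total gain caps at 1/(1-p); the question is how large p really is, and whether LLMs shrink the other terms too.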
Now, don't get me wrong, I accept the limitations of all the current models. I'm currently fairly skeptical that the line will continue to go up as it has been for very much longer… but "very much longer" in this case means 1-2 years, which leaves room for 2-4 more doublings on the METR metric.
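For the back-of-envelope version: the six-month doubling time below is my assumption (METR's reported trend for task time horizons is in that ballpark), and the whole point of contention is whether it holds going forward:

```python
# Sketch: how many doublings of the METR-style task time horizon
# fit in a given window, if the doubling time stays constant.
# The 6-month doubling time is an assumed, illustrative figure.

def n_doublings(window_months: float, doubling_months: float = 6.0) -> float:
    return window_months / doubling_months

def horizon_growth(window_months: float, doubling_months: float = 6.0) -> float:
    """Multiplicative growth of the task time horizon over the window."""
    return 2.0 ** n_doublings(window_months, doubling_months)

print(n_doublings(12))      # 1 year  -> 2.0 doublings
print(n_doublings(24))      # 2 years -> 4.0 doublings
print(horizon_growth(24))   # horizon 16x longer if the trend holds
```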
Also, I expect LLMs to be worse at project management than at writing code, because code quality can be improved by self-play and reading compiler errors, whereas PM has slower feedback. So I do expect "manage the AI" to be a job for much longer than "write code by hand".
But at the same time, you absolutely can use an LLM as a PM. I bet all the PMs can supply anecdotes about LLMs screwing up, just like the rest of us can, but it's still a job task that this generation of AI is automating at the same time as all the other bits.
I agree mostly, though personally I expect LLMs to basically give me whitewashing. They don't innovate. They don't push back enough or take a step back to reset the conversation. They can't even remember something I told them not to do 2 messages ago unless I twist their arm. This is what they are, as a technology. They'll get better. I think there's some impact associated with this, but it's not a doomsday scenario like people are pretending.
We are talking about trying to build a thing we don't even truly understand ourselves. It reminds me of That Hideous Strength, where the scientists try to imitate life by pumping blood into the post-guillotine head of a famous scientist. Like, we can make LLMs do things where we point and say, "See! It's alive!" But in the end people are still pulling all the strings, and there's no evidence that this is going to change.