Quote from the CEO of Anthropic in March 2025: "I think we'll be there in three to six months where AI is writing 90% of the code and then in 12 months we may be in a world where AI is writing essentially all of the code"
Why didn't they just use AI to write their own Bun instead of wasting 8-9 figures on this company? Makes no sense.
Why do people always stop this quote at the breath?
> ...and in 12 months, we may be in a world where the AI is writing essentially all of the code. But the programmer still needs to specify: what are the conditions of what you're doing? What is the overall app you're trying to make? What is the overall design decision? How do we collaborate with other code that has been written? How do we have some common sense about whether this is a secure design or an insecure design? So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced.
(He then said it would continue improving, but this was not in the 12 month prediction.)
Source interview: https://www.youtube.com/live/esCSpbDPJik?si=kYt9oSD5bZxNE-Mn
Maybe he was correct in the extremely literal sense of AI producing more new lines of code than humans. AI is no doubt very good at producing huge volumes of Stuff very quickly, but how much of that Stuff actually justifies its existence is another question entirely.
I actually like Claude Code, but that was always a risky thing to say (actually, I recall him saying their software is 90% AI-produced), considering their CLI tool is literally infested with bugs. (Or at least it was the last time I used it heavily. Maybe they've improved it since.)
I'm curious what people think of quotes like these. It makes an explicit, falsifiable prediction, and that prediction is false; there were plenty of reasons to expect it would be. Is it just optimistic marketing speak, or do they really believe it themselves?
Accurate for me. Accurate for basically every startup from the past 12 months. Prob not for legacy codebases, though.
Why didn't they have the AI write a JS runtime instead of this acquisition?
I think this wound up being close enough to true, it's just that it actually says less than what people assumed at the time.
It's basically the Jevons paradox for code. The price of lines of code (in human engineer-hours) has decreased a lot, so there is a bunch of code that is now economically justifiable which wouldn't have been written before. For example, I can prompt several ad-hoc benchmarking scripts in 1-2 minutes to troubleshoot an issue which might have taken 10-20 minutes each by myself, allowing me to investigate many performance angles. Not everything gets committed to source control.
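For a concrete sense of the throwaway code being described, here's a minimal sketch of the kind of ad-hoc micro-benchmark one might prompt for while troubleshooting (the specific comparison is a hypothetical example, not from the original comment):

```python
import timeit

# Throwaway micro-benchmark: compare two ways of building a string.
# The kind of ad-hoc script that answers one question and never
# gets committed to source control.

def concat_loop(n):
    # Repeated += on a string
    s = ""
    for i in range(n):
        s += str(i)
    return s

def concat_join(n):
    # Single join over a generator
    return "".join(str(i) for i in range(n))

for fn in (concat_loop, concat_join):
    t = timeit.timeit(lambda: fn(10_000), number=50)
    print(f"{fn.__name__}: {t:.3f}s for 50 runs")
```

The point isn't the result of any one script; it's that at a cost of a minute or two each, it becomes economical to probe several performance angles instead of guessing.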
Put another way, at least in my workflow and at my workplace, the volume of code has increased. Most of that increase comes from new code that would not have been written if not for AI, and a smaller portion is code that I would have written before AI but now let the AI write so I can focus on harder tasks. Of course, penetration is uneven: AI helps more with tasks that are well represented in the training set (webapps, data science, Linux admin...) than with, e.g., issues arising from quirky internal architecture, Rust, etc.