Hacker News

Dario Amodei – "We are near the end of the exponential" [video]

93 points by danielmorozoff yesterday at 5:55 PM | 188 comments

Comments

bakibab yesterday at 6:49 PM

One of my friends and I started building a PaaS for a niche tech stack, believing that we could use Claude for all sorts of code generation activities. We thought, if Anthropic and OpenAI are claiming that most of the code is written by LLMs in new product launches, we could start using it too.

Unsurprisingly, we were able to build a demo platform within a few days. But when we started building the actual platform, we realized that the code generated by Claude is hard to extend, and a lot of replanning and reworking needs to be done every time you try to add a major feature.

This brought our confidence level down. We still want to believe that Claude will help in generating code. But I no longer believe that Claude will be able to write complex software on its own.

Now we are treating Claude as a junior member of the team and giving it well-defined, specific tasks to complete.

show 3 replies
dwohnitmok today at 2:09 AM

This is an extremely confusing snippet from the interview for Patel to put as the title.

Amodei does not mean that things are plateauing (i.e. that the exponential will no longer hold), but rather uses "end" closer to the notion of "endgame": that is, we are getting to the point where all benchmarks pegged to human ability will be saturated and AI systems will be better than any human at any cognitive task.

Amodei lays this out here:

> [with regards to] the “country of geniuses in a data center”. My picture for that, if you made me guess, is one to two years, maybe one to three years. It’s really hard to tell. I have a strong view—99%, 95%—that all this will happen in 10 years. I think that’s just a super safe bet. I have a hunch—this is more like a 50/50 thing—that it’s going to be more like one to two [years], maybe more like one to three.

This is why Amodei opens with

> What has been the most surprising thing is the lack of public recognition of how close we are to the end of the exponential. To me, it is absolutely wild that you have people — within the bubble and outside the bubble — talking about the same tired, old hot-button political issues, when we are near the end of the exponential.

Whether you agree with him is of course a different matter altogether, but a clearer phrasing would probably be "We are near the endgame."

crossbody yesterday at 6:33 PM

The concept of the "end of the exponential" sounds like a tech version of Fukuyama's much mocked "End of History". Amodei seems to think we’ll solve all the "useful" problems and then hit a ceiling of utility.

But if you’ve read David Deutsch’s The Beginning of Infinity, Amodei’s view looks like a mistake. Knowledge creation is unbounded. Solving diseases/coding shouldn't result in a plateau, but rather unlock totally new, "better" problems we can't even conceive of yet.

It's the Beginning of Infinity, no end in sight!

show 2 replies
supergilbert yesterday at 6:26 PM

I find myself coding a lot with Claude Code... but then it's very hard to quantify the productivity boost. The first 80% seems magical; the last 20% is painful. I basically have to get the mental model of the codebase into my head no matter what.

show 6 replies
lancebeet yesterday at 6:31 PM

Is "the end of the exponential" an established expression? There's no singularity in an exponential so the expression doesn't make sense to me. To me, it sounds like "the end of the exponential part", meaning it's a sigmoid, but that's obviously not what he means.

show 3 replies
polotics yesterday at 7:34 PM

Referring to a curve with a derivative everywhere equal to its value as something that has an end gives the game away: pure fanciful nominalization with no grounding in any kind of concrete modelling of any constraints.

IMHO this is really silly: we already know that IQ is useful as a metric in the 0 to about 130 range. For any value above that, the delta fails to provide predictive power on real-world metrics. Just this simple fact makes the verbiage here moot. Also, let's consider the wattage involved...

atomic128 yesterday at 6:27 PM

Anthropic's interests are not aligned with the interests of the human species.

Quoting the Anthropic safety guy who just exited, making a bizarre and financially detrimental move: "the world is in peril" (https://www.forbes.com/sites/conormurray/2026/02/09/anthropi...)

There are people in the AI industry who are urgently warning you. Myself and my colleagues, for example: https://www.theregister.com/2026/01/11/industry_insiders_see...

Regulation will not stop this. It's time to build and deploy weapons if you want your species to survive. See earlier discussion here: https://news.ycombinator.com/item?id=46964545

show 1 reply
sidewndr46 yesterday at 6:11 PM

I am always reminded of this article when the topic of 'the exponential' comes up:

https://www.julian.ac/blog/2025/09/27/failing-to-understand-...

show 4 replies
holtkam2 yesterday at 6:52 PM

No matter how fast and accurately your AI apps can spit out code (or PowerPoints, or Excel spreadsheets, or business plans, etc.), you will still need humans to understand how stuff works. If it's truly business-critical software, you can't get around the fact that humans need to deeply understand how and why it works, in case something goes wrong and they need to explain to the CEO what happened.

Even in a world where the software is 100% written in 1 millisecond by a country of geniuses in a data center, humans still need to have their hands firmly on the wheel if they don't want to risk their business's well-being. That means taking the time to understand what the AI put together. That will be the bottleneck regardless of how fast and smart AI is, because unless the CEO wants to be held accountable for what the AI builds and deploys, humans will need to be there to take responsibility for its output.

show 1 reply
readitalready yesterday at 6:29 PM

LLMs alone aren't the way to AGI. Perhaps something that merges in diffusion or other models based on more sensory elements, like images, time, and motion, will get us there, but LLMs alone won't.

The end of the exponential means the start of other models.

show 1 reply
thadk today at 12:36 AM

"We're not perfectly good at preventing some of these other [model] companies from using our models internally." — well maybe this says something about how Opus 4.5 and Opus 4.6 have the same SWE bench score.

show 1 reply
AIorNot yesterday at 11:53 PM

It's Dario's job to hype the product, and he hypes it to get the billions they need. He's a bit more engineering-focused than Altman, but there's no fundamental difference.

A large language model like GPT runs in what you'd call a forward pass. You give it tokens, it pushes them through a giant neural network once, and it predicts the next token. No weights change. Just matrix multiplications and nonlinearities. So at inference time, it does not "learn" in the training sense.
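A minimal sketch of that point, assuming the Hugging Face transformers API and using GPT-2 purely as an illustrative model:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()  # eval mode: disables dropout; no training happens below

    ids = tok("We are near the end of the", return_tensors="pt").input_ids
    with torch.no_grad():             # no gradient tracking, so nothing can update the weights
        logits = model(ids).logits    # one forward pass: matmuls and nonlinearities
    next_id = logits[0, -1].argmax()  # most likely next token
    print(tok.decode(next_id.item()))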

We need some kind of new architecture to get to next-gen wow stuff, e.g. differentiable memory systems: instead of modifying weights, the model writes to a structured memory that is itself part of the computation graph. More dynamic or modular architectures, not bigger scaling and spending all our money on data centers.
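To make "memory as part of the computation graph" concrete, here is a toy sketch in PyTorch, loosely in the NTM/DNC spirit; the class and its addressing scheme are made up for illustration, not any published architecture:

    import torch
    import torch.nn as nn

    class SoftMemory(nn.Module):
        """Toy differentiable memory: state changes via soft writes to
        slots, not via weight updates (illustrative only)."""
        def __init__(self, slots: int = 8, dim: int = 16):
            super().__init__()
            self.register_buffer("mem", torch.zeros(slots, dim))
            self.key = nn.Linear(dim, dim)  # learned addressing projection

        def write(self, x: torch.Tensor) -> None:
            # soft addressing: similarity between each slot and the projected input
            w = torch.softmax(self.mem @ self.key(x), dim=0)  # (slots,)
            # blend x into the addressed slots; differentiable end to end
            self.mem = (1 - w).unsqueeze(1) * self.mem + w.unsqueeze(1) * x

        def read(self, q: torch.Tensor) -> torch.Tensor:
            w = torch.softmax(self.mem @ self.key(q), dim=0)
            return w @ self.mem  # (dim,)

    mem = SoftMemory()
    mem.write(torch.randn(16))  # state updated at "inference time", no optimizer step
    out = mem.read(torch.randn(16))

The point is that write() changes the model's state inside the forward computation, so gradients can flow through the memory operations during training, and the state can keep changing at inference without touching the weights.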

Anybody in the ML community have an answer for this? (Besides better RL, RLHF, and world models.)

GorbachevyChase yesterday at 6:12 PM

Does anyone know who Dwarkesh's patron is that boosted him in the podcast world? He isn't otherwise highly distinguished and admittedly does his show prep with AI, which sometimes shows in his questions. I feel like there are a very large number of tech podcasts, but there's some marketing effect around this guy that I just don't understand.

show 19 replies
almostdeadguy yesterday at 6:27 PM

Is no one disturbed by this? At the rate this seems to be happening, it's going to cause massive disruptions to society and endanger a lot of people.

show 3 replies
knivets yesterday at 6:43 PM

The closer the bubble gets to popping, the more desperate these people sound.

> 100% of today’s SWE tasks are done by the models.

Maybe that’s why the software is so shitty nowadays.

show 3 replies
dude250711 yesterday at 6:29 PM

I think there is a parallel universe where tools like Claude Code actually truly work as advertised, but I am not allowed into it...

Yet news and opinions from that world somehow seep through into my reality...

show 1 reply
seydor yesterday at 6:15 PM

We'll need a new word after "genius".

jaredcwhite yesterday at 6:17 PM

"Nobody disagrees we'll achieve AGI this century."

Citation needed please.

show 2 replies
surgical_fire yesterday at 7:31 PM

Eat meat, said the butcher

deathanatos yesterday at 6:20 PM

> Nobody at this point disagrees we’re going to achieve AGI this century.

Nobody. Nobody disagrees, there is zero disagreement, there is no war in Ba Sing Se.

> 100% of today’s SWE tasks are done by the models.

Thank God, maybe I can go lie in the sun then instead of having to solve everyone's problems with ancient tech that I wonder why humanity is even still using.

Oh, no? I'm still untying corporate Gordian knots?

> There is no reason why a developer at a large enterprise should not be adopting Claude Code as quickly as an individual developer or developer at a startup.

My company tried this, then quickly stopped: $$$

show 9 replies
alephnerd yesterday at 6:31 PM

[flagged]

show 4 replies
co_king_3 yesterday at 6:12 PM

[flagged]

show 1 reply
theideaofcoffee yesterday at 6:24 PM

> end of the exponential.

Oh good, hopefully it'll model itself after the exponential rise of an animal population and collapse on itself because it can no longer be sustained! Isn't that how things go in exponential systems with resource constraints? We can only hope that will be the best outcome. That would be wonderful.

reducesuffering yesterday at 7:00 PM

It's difficult to overstate how wrong HN has been on AI since the founding of OpenAI, and how consistently right Dario and the AI X-riskers have been.

show 1 reply
viking123 yesterday at 6:36 PM

I have said that Amodei is by far worse than Sam Altman. Altman wants money, but this guy wants the money AND to be your dad, censoring the shit out of the model and wagging his finger at you about what you can and cannot say. And lobbying for legislation to block competition. Also the constant "muh china" whining, while these guys stole all the books in the world.

Every time I read something from Dario, it seems like he is grifting normies and other midwits with his "OHHH MY GOD, CLAUDE WAS WILLING TO KILL SOMEONE! MY GOD, IT WANTS TO BREAK OUT!" Then they have all their Claude constitution bullshit and other nonsense to fool idiots. Yeah bro, the model with static weights is truly going to take over.

He knows what he is doing; it's all marketing, and they have put a shit ton of money into it, if you have been following the media for the last few months.

Btw, it wasn't many months ago that this guy was hawking a doubling of the human life span to a group of boomer investors. Oh yeah, I wonder why he decided to bring it up there? Maybe because the audience is old and desperate, and scammers play on those weaknesses.

Truly one of the more obnoxious people in the AI space, and frankly, by extension, Anthropic is scammy too. I'd rather pay Altman than give these guys a penny, and that says a lot.

show 2 replies
taco_emoji yesterday at 6:23 PM

thanks for the autoplay audio crap

Davidzheng yesterday at 10:38 PM

It's difficult for me to express this view, which I hold genuinely, without reading as lacking in humanity. However, I think it would be disastrous for humanity as a whole if we eliminate disease completely. To fight against it and to make progress in that fight is of course deeply human. And we are all affected emotionally and personally by disease of all forms. But if we win the fight against disease, I am almost sure that the human race will just end as a (long term) consequence.

show 1 reply