Hacker News

og_kalu · 01/22/2025 · 9 replies

"There are maybe a few hundred people in the world who viscerally understand what's coming. Most are at DeepMind / OpenAI / Anthropic / X but some are on the outside. You have to be able to forecast the aggregate effect of rapid algorithmic improvement, aggressive investment in building RL environments for iterative self-improvement, and many tens of billions already committed to building data centers. Either we're all wrong, or everything is about to change." - Vedant Misra, Deepmind Researcher.

Maybe your calibration isn't poor. Maybe they really are all wrong. But there's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance in the hope of making a quick buck before it all comes crashing down, and I don't think that's true at all. I think these people genuinely believe they're going to get there. And if you genuinely believe that, then this kind of investment isn't so crazy.


Replies

rhubarbtree · 01/22/2025

The problem is, they are hugely incentivised to hype in order to raise funding. It’s not whether they are “wrong”, it’s whether they are being realistic.

The argument presented in the quote there is: “everyone at the AI foundation companies is putting money into AI, therefore we must be near AGI.”

The best evaluation of progress is to use the tools we have. It doesn’t look like we are close to AGI. It looks like amazing NLP with an enormous amount of human labelling.

show 3 replies
skrebbel · 01/22/2025

> there's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance in the hope of making a quick buck before it all comes crashing down, and I don't think that's true at all. I think these people genuinely believe they're going to get there.

I don't immediately disagree with you but you just accidentally also described all crypto/NFT enthusiasts of a few years ago.

show 3 replies
root_axis · 01/22/2025

Motivated reasoning sings nicely to the tune of billions of dollars. None of these folks will ever say, "don't waste money on this dead end". However, it's clear that there is still a lot of productive value to extract from transformers, and certainly there will be other useful things that appear along the way. It's not the worst investment I can imagine, even if it never leads to "AGI".

show 1 reply
ca_tech · 01/22/2025

I am not qualified to make any assumptions, but I do wonder if a massive investment in computing infrastructure serves national security purposes beyond AI. Like building subway stations that also happen to serve as bomb shelters.

Are there computing and cryptography problems that the infrastructure could be (publicly or quietly) reallocated to address if the United States found itself in a conflict? Any cryptographers here have a thought on whether hundreds of thousands of GPUs turned on a single cryptographic key would yield any value?
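
As a rough sanity check on that last question: a back-of-envelope estimate (with an assumed per-GPU trial rate, since real rates depend heavily on the cipher) suggests that brute force against a single well-chosen 128-bit symmetric key is hopeless even for a fleet of that size.

    # Rough estimate: exhaustively searching a 128-bit keyspace with a GPU fleet.
    # The per-GPU rate below is an assumed ballpark, not a measured benchmark.
    SECONDS_PER_YEAR = 3.15e7

    keyspace = 2 ** 128           # possible 128-bit keys
    gpus = 1_000_000              # a generously large fleet
    keys_per_gpu_per_sec = 1e10   # assumption: ~10 billion trial keys/sec per GPU

    years = keyspace / (gpus * keys_per_gpu_per_sec) / SECONDS_PER_YEAR
    print(f"{years:.1e} years")   # ~1e15 years, vastly longer than the age of the universe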

show 1 reply
DebtDeflation · 01/22/2025

> Maybe they really are all wrong

All? Quite a few of the best minds in the field, Yann LeCun for example, have been adamant that 1) autoregressive LLMs are NOT the path to AGI and 2) AGI is very likely NOT just a couple of years away.

show 4 replies
sanderjd · 01/22/2025

I think it will be in between, like most things end up being. I don't think they are charlatans at all, but I think they're probably a bit high on their own supply. I think it's true that "everything is about to change", but I think that change will look more like the status quo than the current hype cycle suggests. There are a lot of periods in history when "everything changed", and I believe we're already a number of years into one of those periods now, but in all those cases, despite "everything" changing, a perhaps surprising number of things remained the same. I think this will be no different than that. But it's hard, impossible really, to accurately predict where the chips will land.

paul7986 · 01/22/2025

My prediction is that Apple loses to OpenAI, which releases a phone like the one in the movie "Her". She appears on your lock screen, a la a FaceTime call UI/UX, and she can be skinned to look like whoever you want; e.g. a deceased loved one.

She interfaces with the AI agents of companies, organizations, friends, family, etc. to get things done for you automagically (or to learn from: what's my friend's birthday? His agent tells yours), and she is like a friend. Always there for you at your beck and call, like in the movie.

Zuckerberg's glasses, which cannot take selfies, will only be complementary to our AI phones.

That's just my guess and desire as a fervent GPT user, as well as a Meta Ray-Ban wearer (can't take selfies with glasses).

show 4 replies
nejsjsjsbsb · 01/22/2025

I am hoping it is just the usual Ponzi thing.

show 1 reply
bcrosby95 · 01/22/2025

So they're either wrong or building Skynet.