Hacker News

BloondAndDoom · today at 12:34 AM · 26 replies

Can someone help me understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them? We have very strong open (weight) Chinese models, possibly only 6 months behind them; the genie is out of the bottle. Is a 6-month difference really that important? And they don’t have good reasons to expect those 6 months to stay that way.

Am I missing something, or is this just their usual marketing? I’m not arguing about the importance of AI, but trying to understand why OpenAI and Anthropic specifically are so important.


Replies

unleaded · today at 12:55 AM

It's a marketing strategy. If it's almost certainly conscious and capable of ending the world if it desired (even if it isn't), imagine how good it could be at building your dream SaaS!

hxycgd · today at 1:29 AM

It is not about the US or the Chinese. It's about the "Elephant and Rider" mind everyone has. Once the Elephant has been injured or scared, what it does next is not easy to control, and the story the Rider makes up to maintain coherence becomes another layer of the deeper problem. If the story resonates, more elephants get triggered. Social media and the attention economy make it even more complex to calm things down.

Modern corporations are a failed experiment because they don't think Elephant injuries and fears are something they have to worry about. If you compare the curriculum of a business school to that of a seminary, how they think about fear and anxiety at the individual and group level, and what to do about it, is totally different. We are learning, as unpredictability accelerates, that it's very important to pay attention to hurt and repair mechanisms.

johnfn · today at 12:37 AM

Some people think there will be an exponential takeoff, which means that a 6 month lead effectively rounds up to infinity.

ghshephard · today at 1:13 AM

Would any of the open weight models from smaller labs exist if they couldn't distill from the SoTA models that are throwing billions of dollars of compute into pretraining?

isodev · today at 12:50 AM

> just their usual marketing

I think that’s a very common element for most US tech corps. Apple, Google, Microsoft, Meta, X, etc. - they’re all “making a dent in the universe”. It’s unfortunate when their employees and CEOs lose track of the line that separates marketing from reality.

ctolsen · today at 6:12 AM

Having worked with both proprietary and open weight SOTA models lately, my view is it's definitely not 6 months, it's less -- and shrinking.

cj · today at 12:37 AM

These kinds of people have highly paid employees surrounding them on all sides, propping them up and very likely making it very easy for them to actually believe it.

It feels like they actually believe it, rather than just “marketing” and I don’t know which is worse.

jatora · today at 3:02 AM

6 months is an incredible amount of time to control AGI or ASI by yourself. That lead is insurmountable.

pants2 · today at 3:13 AM

Presumably because it takes 6 months to distill Claude - but if they keep it closed like they are doing with Mythos it may take significantly longer.

therealpygon · today at 1:11 AM

Especially when Google is in a far better position to come out ahead…imo.

Edit: so as not to simply spout an opinion, the reason I believe this is that Google already has a real business and was deep into ML and AI research long before it had competitors; it just botched making it a product in the beginning. Anthropic and OpenAI, meanwhile, are spending hand over fist to subsidize user acquisition. Also, “DeepMind”. I don’t think much more needs to be said about that team, and Google has been working on AI since before either Altman or Amodei applied to college. They have a vast number of researchers and resources, plus their own hardware and data centers (already built, not “planned”), and it appears to be showing more recently (in my opinion).

WarmWash · today at 2:16 AM

GLM 5.1, widely held up as the model at the heels of, perhaps even surpassing, Western models....

Gets 5% on ARC-AGI2 private set.

Chinese models are suspiciously good at benchmarks.

zeeed · today at 5:18 AM

To be fair, the other 50% of the story is that we collectively listen.

It’s been a long while since I found a Chinese CEO’s post on HN.

tyleo · today at 12:40 AM

I suppose most just haven’t seen the Chinese models in practice. I haven’t. I was skeptical of AI coding until using Claude Code in February. I saw and I believed. I’ve only done that with Google, OpenAI, and Anthropic’s models so far.

0xbadcafebee · today at 5:43 AM

Well they represent the future of America (since we will soon be banning all the Chinese companies, the way Z.ai was banned, under the perennial authoritarian excuse of "national security"; in 2028, Trump's political machine will seize control of all national AI and block outside ones, and we'll all be trapped inside this machine we created).

Whether fortunately or unfortunately, America still holds a lot of global chips in the grand poker game of humanity. So American companies do indeed still have an outsized influence on humanity's future. That is likely changing, as the American empire continues to crumble and it loses its financial hegemony. But we aren't quite there yet.

neya · today at 12:53 AM

Two words: Delusion and overconfidence.

"You're absolutely right!" Right after fucking up my entire codebase isn't anywhere near AGI, let alone "having the power to control it"

scruple · today at 2:49 AM

> Can someone help me understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them?

He wants to build the AI that makes people's lives better. Okay. Did the people ask? Do they have a say? It's all very easy for a billionaire to say when it's just him and a couple of people in his cohort in the driver's seat.

Beyond that I'd like to simply know why he thinks any of this is his responsibility. It seems much more obvious to me that he simply found himself in the right place at the right time and is trying to seize it all for himself as if it's his to take.

nthypes · today at 12:35 AM

I have the same feelings

efficax · today at 1:32 AM

you have to talk that way if you’re going to raise 100 billion in venture capital. it’s the grift

georgemcbay · today at 12:55 AM

When you are raising many billions of dollars to build up your infrastructure, you don't have much choice but to project a belief that the eventual outcome will result in a situation where there will be a return on that money.

That said, I do agree with you that the moats are very shallow and any particular frontier AI lab is unlikely to "win the AI race" and capture enough value to be worth the amount of investment they are all currently burning.

stavros · today at 12:45 AM

The Chinese models are distilled from GPT and Claude, so it's not like China would pull ahead if those companies went away for six months. They really are at the forefront of innovation right now, as much as I hate to think of the consequences of this (a single company owning a superintelligence is basically a nightmare scenario for me).

fooker · today at 1:44 AM

Reminds me of the Silicon Valley episode where every company repeated the phrase “making the world a better place”.

MaxPock · today at 3:56 AM

Your (American) future will be controlled by them. Very soon, they will get the government to ban bad Chinese open source models and your choice will only be these good democratic closed source AIs.

gorpy7 · today at 3:07 AM

i’ve often thought that less than one second is all you need. One of my fun super powers, when someone asks what i’d like to have, is being 1 second ahead of everyone else - that’s all i need. i honestly don’t know where the distillation conversation is at. is it real, is it ongoing? i think that aspect would be a big one. Your point is valid if it’s valid. i’m not a great global citizen, you know, lots going on out and about.

tinyhouse · today at 12:41 AM

They own the best models and will probably keep owning the best models for a while. They have much more compute now and more data to keep improving their models on many tasks. Open source won't close the gap in 6 months. They are also trying to block other companies from distilling their models [0].

[0] https://www.anthropic.com/news/detecting-and-preventing-dist...

kingkawn · today at 1:30 AM

6 months will be an impossible gap once the thing starts closed-loop self-improvement.
