Hacker News

shubhamjain today at 2:01 PM

I was wondering if it was because of the heavy-handedness of the administration, but apparently:

> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.

Their core argument is that if we have guardrails that others don't, they would be left behind in controlling the technology, and they are the "responsible ones." I honestly can't comprehend the timeline we are living in. Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.


Replies

ACCount37 today at 3:19 PM

That's because it is.

AI is powerful and AI is perilous. The two aren't mutually exclusive; both follow directly from the same premise.

If AI tech goes very well, it can be the greatest invention of all human history. If AI tech goes very poorly, it can be the end of human history.

tyre today at 2:35 PM

“A source familiar with the matter” is almost certainly a company spokesperson.

If they were unrelated, Anthropic wouldn’t be doing this this week because obviously everyone will conflate the two.

nextaccountic today at 7:18 PM

> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.

This sounds like a lie. But even if they are telling the truth, the timing is terrible nonetheless.

Rapzid today at 2:26 PM

Well, before this, Anthropic thought they were God's gift to AI: the chosen ones protecting humanity.

With the latest competing models, they are now realizing they are an "also" provider.

Sobering up fast, with an ice bucket of 5.3-codex, Copilot, and OpenCode dumped on their heads.

tenthirtyam today at 3:25 PM

I always enjoyed the Terminator movie series, but I always struggled to suspend my disbelief that any humans would give an AI such power without having the ability to override or pull the plug at multiple levels. How wrong I was.

N.B. the time travel aspect also required suspension of disbelief, but somehow that was easier :-)

jdross today at 2:15 PM

Would nuclear energy research be a good analogy then? It seems like a path we should have kept running down, but we stopped because of the weapons. So we got the weapons but not the humanity-saving parts (infinite clean energy).

whywhywhywhy today at 2:35 PM

> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons

They're not, really; it's always been a form of PR, both to hype their research and to make sure it's locked away to be monetized.

goodmythical today at 4:37 PM

Isn't curing cancer just as dangerous as a nuclear bomb? Especially considering some of the gene therapies under consideration? Because you can bet that a non-negligible portion of research in this space is being funded by governments and groups interested in applications beyond curing cancer. (Autism? Whiteness? Jewishness? Race in general? Faith in general? Could China finally cure Western greed? Maybe we can slip some extra compliance in there so that the plebia- ah- population is easier to contr- ah- protect.)

Curing all cancers would increase population growth by more than 10% (9.7-10M cancer-related deaths per year vs. the current 70-80M annual growth), and it would age the population on average, since curing cancer would raise general life expectancy and a majority of the lives saved would be older people.
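The ">10%" claim above is easy to sanity-check from the commenter's own (unverified) figures:

```python
# Back-of-the-envelope check of the population-growth claim above.
# Both figures are the commenter's, not independently verified.
cancer_deaths_per_year = 10_000_000   # upper end of the cited 9.7-10M annual cancer deaths
net_population_growth = 75_000_000    # midpoint of the cited 70-80M annual net growth

# If those deaths stopped, annual growth would rise by this fraction:
increase = cancer_deaths_per_year / net_population_growth
print(f"Growth rate increase: {increase:.1%}")  # roughly 13%, so "more than 10%" holds
```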

We'd even see a jobs-and-resources shock (though likely dissimilar in scale) as billions in funding shifted away from oncologists, oncology departments, oncology wards, etc. Billions of dollars, millions of hospital beds, and countless specialized professionals all suddenly reassigned, just as in AI.

Honestly, the cancer/nuclear/tech comparison is rather apt. Each either is or could be disruptive, and either is or could be a net negative to society, while posing the possibility of the greatest revolution we've seen in generations.

whatshisface today at 3:57 PM

Shouldn't we be a little more skeptical about these abstract arguments when a very concrete sale is on the line?

coffeefirst today at 4:23 PM

Let's suppose I believe them; it's still a bad idea.

The reason Claude became popular is that it made shit up less often than other models and was better at saying "I can't answer that question." The guardrails are quality control.

I would rather have more reliable models than more powerful models that screw up all the time.

mikkupikku today at 3:26 PM

To paraphrase a deleted comment that I thought was actually making a good point, nuclear medicine and nuclear weapons are both fruit from the same tree.

scottLobster today at 3:27 PM

> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.

Maybe some of the more naive engineers think that. At this point, any big tech business or SV startup saying they're in it to usher in some piece of the Star Trek utopia deserves to be smacked in the face for insulting the rest of us like that. The argument is always "well, the economic incentive structure forces us to do this bad thing, and if we don't, we're screwed!" Oh, so your ideals are so shallow that you aren't willing to risk a tiny fraction of your billions to meet them. Cool.

Every AI company/product in particular is the smarmiest version of this. "We told all the blue collar workers to go white collar for decades, and now we're coming for all the white collar jobs! Not ours though, ours will be fine, just yours. That's progress, what are you going to do? You'll have to renegotiate the entire civilizational social contract. No we aren't going to help. No we aren't going to sacrifice an ounce of profit. This is a you problem, but we're being so nice by warning you! Why do you want to stand in the way of progress? What are you a Luddite? We're just saying we're going to take away your ability to pay your mortgage/rent, deny any kids you have a future, and there's nothing you can do about it, why are you anti-progress?"

Cynicism aside, I use LLMs to the marginal degree that they actually help me be more productive at work. But at best this is Web 3.0. The broader "AI vision" really needs to die.

kelnos today at 6:11 PM

"It's not because of the Pentagon deal", says company that has just greased the wheels for said Pentagon deal to move forward.

Riiiiiight.

austinjp today at 5:14 PM

> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.

And they alone are responsible enough to govern it.

francisofascii today at 3:19 PM

It is a "reasonable" argument to keep yourself in the game, but it is sad nonetheless. You sacrifice your morals and do bad things so that, if things get way worse, maybe you will be in a position to stop something really bad from happening. Of course, you might just end up participating in the really bad thing.

sonusario today at 5:09 PM

I wonder if it stems from any of the "AI uprising" stories where humanity is viewed as the cancer to be eradicated.

3acctforcom today at 6:58 PM

I lie too.

amelius today at 4:30 PM

> Their core argument is that if we have guardrails that others don't, they would be left behind in controlling the technology, and they are the "responsible" ones.

Reminds me of:

https://en.wikipedia.org/wiki/Paradox_of_tolerance

which has the same kind of shitty conclusion.

oatmeal1 today at 5:18 PM

90% of the people cancer kills are over 50. Old people who start believing everything they see on Facebook, but continue voting, with even greater confidence in their opinions. Old people who voted in Trump. Curing cancer would be just about the worst thing AI could do.

skeptic_ai today at 2:51 PM

OpenAI never open-sourced anything relevant or in time. Internal email leaks show they only cared about becoming billionaires.

Claude only talks about safety, but Anthropic has never released anything open source.

All this said, I'm surprised China actually delivered so many open-source alternatives, which are decent.

Why didn't Western companies (which are supposed to be the good guys) release anything open source to help humanity? Why always claim they can't release because of safety, and then hand unlimited AI to the military? Just bullshit.

Let's all be honest and just say you only care about the money, and you take from whoever pays.

They are businesses, after all, so their goal is to make money. But please don't claim you want to save the world or help humans. You just want to get rich at others' expense. Which is totally fair: you make a good product and you sell it.

moogly today at 4:42 PM

"Those other companies are totally going to build the Torment Nexus, so we have no choice but to also build the Torment Nexus."

afavour today at 3:13 PM

It's exhausting to keep up with mainstream AI news because of this. I can never work out whether the companies are deluded and truly believe they're about to create a singularity, or are just claiming they are to reassure investors and convince the public of their inevitability.

api today at 6:16 PM

The fear mongering always struck me as mostly a bid for regulatory capture and a moat, because without that the moat is small and transient.

cmrdporcupine today at 3:59 PM

We all made fun of Blake Lemoine and others for staying up too many late nights chatting with (ridiculously primitive by this year's standards) LLM chatbots and deciding they were sentient and trapped.

But frankly, I feel like the founders of Anthropic and others are victims of the same hallucination.

LLMs are amazing tools. They play back & generate what we prompt them to play back, and more.

Anybody who mistakes this for SkyNet -- an independent consciousness with instant, permanent learning, adaptation, and self-awareness -- is just huffing the fumes, and just as delusional as Lemoine was four years ago.

Every one of us should spend some time writing an agentic tool and managing context and the agentic conversation loop. These things are still primitive as hell. I still have to "compact my context" every N tokens, and "thinking" is just replaying the same conversational chain over and over and jamming more words in.
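The loop being described really is this simple. A minimal sketch of the replay-and-compact pattern, with a stub standing in for the model call (all names here are hypothetical, not any real agent framework's API):

```python
# Minimal sketch of an agentic conversation loop with naive context
# compaction. `fake_llm` and `compact` are illustrative stand-ins; a real
# tool would call an actual LLM API and summarize with the model itself.

MAX_CONTEXT_MESSAGES = 8  # stand-in for a token budget, counted in messages


def fake_llm(messages):
    """Stub model call: just reports how much context it was handed."""
    return f"(reply based on {len(messages)} messages)"


def compact(messages):
    """Naive compaction: collapse everything but the last two messages."""
    summary = {"role": "system",
               "content": f"[summary of {len(messages) - 2} earlier messages]"}
    return [summary] + messages[-2:]


def agent_loop(user_turns):
    messages = []
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
        # The entire conversation is replayed to the model on every turn...
        reply = fake_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        # ...so the context must be compacted once it grows too large.
        if len(messages) > MAX_CONTEXT_MESSAGES:
            messages = compact(messages)
    return messages


history = agent_loop([f"question {i}" for i in range(6)])
print(len(history))  # stays bounded despite 6 turns
```

There is no persistent learning anywhere in this loop: the "memory" is just a list that gets lossily squashed when it overflows, which is the point the comment is making.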

Turns out this is useful stuff. In some domains.

It ain't SkyNet.

I don't know whether Anthropic is truly high on their own supply or just taking us all for fools so that they can pilfer investor money and push for regulatory capture.

There's also a bad trait among engineers, deeply reinforced by survivorship bias: assuming that every technological trend follows Moore's law and exponential growth. But that applie[s|d] to transistors, not everything.

I see no evidence that LLMs + exponential growth in parameters + context windows = SkyNet or any other kind of independent consciousness.
