Hacker News

kouteiheika yesterday at 7:42 PM

> Anthropic occupies a peculiar position in the AI landscape: a company that genuinely believes it might be building one of the most transformative and potentially dangerous technologies in human history, yet presses forward anyway. This isn't cognitive dissonance but rather a calculated bet—if powerful AI is coming regardless, Anthropic believes it's better to have safety-focused labs at the frontier than to cede that ground to developers less focused on safety (see our core views).

Ah, yes, safety, because what is safer than helping DoD/Palantir kill people[1]?

No, the real risk here is that this technology is going to be kept behind closed doors, and monopolized by the rich and powerful, while us scrubs will only get limited access to a lobotomized and heavily censored version of it, if at all.

[1] - https://www.anthropic.com/news/anthropic-and-the-department-...


Replies

reissbaker yesterday at 7:52 PM

This is the major reason China has been investing in open-source LLMs: because the U.S. publicly announced its plans to restrict AI access into tiers, and certain countries — of course including China — were at the lowest tier of access. [1]

If the U.S. doesn't control the weights, though, it can't restrict China from accessing the models...

1: https://thefuturemedia.eu/new-u-s-rules-aim-to-govern-ais-gl...

jimbo808 today at 3:04 AM

I don't believe that they believe it; I believe they're all in on doing all the things you'd do if your goal was to demonstrate to investors that you truly believe it.

The safety-focused labs are the marketing department.

An AI that can actually think and reason, and not just pretend to by regurgitating/paraphrasing text that humans wrote, is not something we're on any path to building right now. They keep telling us these things are going to discover novel drugs and do all sorts of important science, but internally, they are well aware that these LLM architectures fundamentally can't do that.

A transformer-based LLM can't do any of the things you'd need to be able to do as an intelligent system. It has no truth model, and lacks any mechanism for understanding its own output. It can't learn and apply new information, especially not if it can't fit within one context window. It has no way to evaluate whether a particular sequence of tokens is likely to be accurate, because it selects them only by how likely they are to appear in a similar sequence in the training data. It can't internally distinguish "false but plausible" from "true but rare." Many things that would be obviously wrong to a human would appear "obviously" correct when viewed from the perspective of an LLM's math.
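
To make the mechanism concrete, here's a minimal sketch in Python of the only operation the decoder actually performs at each step: turning scores into a probability distribution and sampling from it. The candidate tokens and logit values are invented for illustration; a real model scores its entire vocabulary.

    import numpy as np

    # Hypothetical logits a model might assign to candidate next tokens
    # after the prompt "The capital of Australia is". Values are made up;
    # a real model produces one score per vocabulary entry.
    candidates = ["Sydney", "Canberra", "Melbourne", "Paris"]
    logits = np.array([3.1, 2.9, 1.5, -2.0])  # "Sydney": plausible, but false

    def softmax(x):
        # Convert raw scores into a probability distribution.
        e = np.exp(x - np.max(x))  # subtract max for numerical stability
        return e / e.sum()

    probs = softmax(logits)
    for token, p in zip(candidates, probs):
        print(f"{token:10s} {p:.3f}")

    # Sampling picks tokens in proportion to plausibility, nothing else.
    # No step in this process checks whether the chosen token makes the
    # sentence true.
    rng = np.random.default_rng(0)
    print("sampled:", rng.choice(candidates, p=probs))

The only signal in that loop is "how probable is this continuation": a plausible-but-false token and a true-but-rare one differ only in score, which is exactly the distinction the model has no internal way to make.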

These flaws are massive and, IMO, insurmountable. It doesn't matter if it can do 50% of a person's work effectively, because you can't reliably predict which 50% it will do. Given this unpredictability, its output has to be very carefully reviewed by an expert before it can be used for any work that matters. Even worse, the mistakes it makes are inherently difficult to spot, because it will always generate the text that looks the most right. Spotting the fuckup in something that was optimized not to look like a fuckup is much more difficult than reviewing work done by a well-intentioned human.

flatline yesterday at 9:44 PM

Ironically, this is the one part of the document that jumped out at me as having been written by AI. The em-dash and "this isn't...but" pattern are louder than the text at this point. It seriously calls into question who is authoring what, and what their actual motives are.

regularization yesterday at 7:59 PM

> to ensure AI development strengthens democratic values globally

I wonder if that's helping the US Navy shoot up fishing boats in the Caribbean or facilitating the bombing of hospitals, schools and refugee camps in Gaza.

ben_w yesterday at 11:39 PM

> No, the real risk here is that this technology is going to be kept behind closed doors, and monopolized by the rich and powerful, while us scrubs will only get limited access to a lobotomized and heavily censored version of it, if at all.

Given the number of leaks, deliberate publications of weights, and worldwide competition, why do you believe this?

(Even if by "lobotomised" you mean "refuses to assist with CNB weapon development").

Also, you can have more than one failure mode be true at once. A protest against direct local air pollution from a coal plant is still valid even though the greenhouse effect exists, and vice versa.

Aarostotle yesterday at 8:00 PM

A narrow and cynical take, my friend. With all technologies, "safety" doesn't equate to plushie harmlessness. There is, for example, a valid notion of "gun safety."

Long-term safety for free people entails military use of new technologies. Imagine if people advocating airplane safety had groused about bomber and fighter planes being built and mobilized in the Second World War.

Now, I share your concern about governments who unjustly wield force (either in war or covert operations). That is an issue to be solved by articulating a good political philosophy and implementing it via policy, though. Sadly, too many of the people who oppose the American government's use of such technology have deeply authoritarian views themselves — they would just prefer to see a different set of values forced upon people.

Last: Is there any evidence that we're getting some crappy lobotomized models while the companies keep the best for themselves? It seems fairly obvious that they're tripping over each other in a race to give the market the highest intelligence at the lowest price. To anyone reading this who's involved in that, thank you!

patcon yesterday at 11:04 PM

What if more power (from the state) goes to the group that does engage in those activities, and Anthropic therefore gets marginalized as shadow sectors of state power pick a different winner?

These things are not clear. I do not envy those who must neurotically think through the first-order, second-order, and third-order judgements about justice, "evil", and "good" that one must make. It's a statecraft-level hierarchy of concerns that would leave me immensely challenged.

skybrian yesterday at 8:46 PM

I don't think that's a real risk. There are strong competitors from multiple countries releasing new models all the time, and some of them are open weights. That's basically the opposite of a monopoly.

ardata yesterday at 9:24 PM

risk? certainty. it's pretty much guaranteed. the most capable models are already behind closed doors for gov/military use and that's never changing. the public versions are always going to be several steps behind whatever they're actually running internally. the only question is how big the gap between the corporate and pleb versions will be

UltraSane yesterday at 9:51 PM

I predict that billionaires will pay to build their own completely unrestricted LLMs that will happily help them get away with crimes and steal as much money as possible.