Hacker News

reducesuffering · yesterday at 9:37 PM

It's because few realize how downstream most of this AI industry is of Thiel, Eliezer Yudkowsky and LessWrong.com.

The early "rationalist" community was concerned with AI in this way 20 years ago. Eliezer inspired the founders of Google DeepMind and introduced them to Peter Thiel to secure their funding. Altman acknowledged how influential Eliezer was by saying he may be the person most deserving of a Nobel Peace Prize if AGI goes well (LessWrong / "rationalist" discussion having prompted OpenAI). Anthropic was a more X-risk-concerned fork of OpenAI. Paul Christiano, inventor of RLHF, was a big LessWrong member. AI 2027 was written by an ex-OpenAI LessWrong contributor and Scott Alexander, a centerpiece of LessWrong / "rationalism". Dario, Anthropic's CEO, has a sister who is married to Holden Karnofsky, a centerpiece of effective altruism, itself a branch of LessWrong / "rationalism". The origin of all this was directionally correct, but there was enough power, money, and "it's inevitable" thinking to temporarily blind smart people for long enough.


Replies

techblueberry · today at 12:56 AM

It is very weird to wonder: what if they're all wrong? Sam Bankman-Fried was clearly just as committed to these ideas, and he crashed his company into the ground.

But if, out of context, someone said something like this:

"Clearly, the most obvious effect will be to greatly increase economic growth. The pace of advances in scientific research, biomedical innovation, manufacturing, supply chains, the efficiency of the financial system, and much more are almost guaranteed to lead to a much faster rate of economic growth. In Machines of Loving Grace, I suggest that a 10–20% sustained annual GDP growth rate may be possible."

I'd say they were a snake oil salesman.

minimaltom · today at 4:57 AM

> Anthropic was a more X-risk concerned fork of OpenAI.

What is X-risk? Inductively I would have guessed "adult" (as in X-rated), but that doesn't sound right.

strange_quark · today at 3:03 AM

I really recommend “More Everything Forever” by Adam Becker. The book does a really good job laying out the arguments for AI doom, EA, accelerationism, and affiliated movements, including an interview with Yudkowsky, and then debunking them. It really opened my eyes to how… bizarre? eccentric? unbelievable? this whole industry is. I’ve been in tech for over a decade but don’t live in the Bay Area, and some of the stuff these people believe, or at least say they believe, is truly nuts. I don’t know how else to describe it.