Hacker News

dan-robertson · yesterday at 10:28 PM · 9 replies

Why does being a top AI researcher so often come with this philosophical bent you describe?


Replies

ladberg · yesterday at 10:30 PM

You are paying the smartest people in the world to think really, really hard, and it turns out they might also think really, really hard about not making the world a worse place.

mynameisash · yesterday at 10:50 PM

I would think it's because of the staggering money they're making. According to Fortune[0]:

> Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”

> Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”

If you're making a minimum of $2M/year or even 50x that, you can afford to live according to your values instead of checking them at the door.

[0] https://archive.ph/lBIyY

tdb7893 · yesterday at 11:24 PM

My experience with researchers (though not in AI) is that they're a bunch of very opinionated nerds who are mostly motivated by love of a subject. In my experience, most people who think really deeply about and care about what they do also care that their work is prosocial.

wombatpm · yesterday at 10:36 PM

Because it is not Macrodata Refinement, and you can’t stop them from thinking off the clock.

cloverich · yesterday at 11:05 PM

This isn’t unique to top AI researchers. Top talent has a long history of being averse to authoritarianism and despotism, at least in part because, almost by definition, those systems must suppress truth. You can’t build the future effectively with that approach.

janalsncm · today at 12:12 AM

Aside from the Maslow’s hierarchy of needs points others are making, I believe it has something to do with the history of AI research.

There is a big overlap between the “rationalist” and “effective altruist” crowds and some AI research ideas. At a minimum they come from the same philosophy: define an objective, and find methods to optimize that objective. For AI that’s minimizing loss functions with better and better models of the data. For EA, that’s allocating money in ways they think are expectation-maximizing.

Note this doesn’t apply to everyone. Some people just want to make money.

derektank · yesterday at 10:37 PM

Because a lot of them are academics who are, after all, doctors of philosophy.

refulgentis · yesterday at 10:33 PM

Maybe you’re reading “philosophical bent” as “armchair philosopher,” as in they’re dabbling in a field unrelated to their profession and letting it drive their work. Would “worldview” have made it clearer?

hermanzegerman · yesterday at 10:29 PM

Because they can afford it: they are very sought after.

And smart people usually have moral convictions.

I know that for some people on this website it’s hard to understand, but not everything in life is about $$$.
