Hacker News

nine_k yesterday at 1:57 AM

Public access, triggering a few racist responses from the model, a viral post on Xitter, the usual outrage, a scandal, the project gets publicly vilified, financing ceases. The researchers carry the tail of negative publicity throughout their remaining careers.

Why risk all this?


Replies

vintermann yesterday at 7:18 AM

Because the problem of bad faith attacks can only get worse if you fold every time.

Sooner or later society has to come to terms emotionally with the fact that other times and places value things completely differently than we do, hold as important things we don't care about, and are indifferent to things we do care about.

Intellectually I'm sure we already know this, but banning old books because they express reprehensible values (or even just use nasty words), or indeed refusing to release a model trained on historic texts "because it could be abused", is a sign that emotionally we haven't.

It's not that it's a small deal, or should be expected to be easy. It's basically what Popper called "the strain of civilization" and posited as an explanation for the totalitarianism rising in his time. But our values can't be so brittle that we can't even talk or think about other value systems.

cj yesterday at 3:12 AM

Because there are easy workarounds. If it becomes an issue, you can quickly add large disclaimers informing people that there might be offensive output because, well, it's trained on texts written during the age of racism.

People typically get outraged when they see something they weren't expecting. If you tell them ahead of time, they generally won't blame you (they'll blame themselves for choosing to ignore the disclaimer).

And if disclaimers don't work, rebrand and relaunch it under a different name.

kurtis_reed yesterday at 4:09 AM

If people start standing up to the outrage, it will lose its power.

nofriend yesterday at 4:37 AM

People know that models can be racist now. It's old hat. "LLM gets prompted into saying vile shit" hasn't been notable for years.

NuclearPM yesterday at 2:17 AM

That’s ridiculous. There is no risk.

why-o-why yesterday at 3:18 AM

I think you are confusing research with commodification.

This is a research project: it is clear how it was trained, and it is targeted at experts, enthusiasts, and historians. If I were studying racism, the reference books explicitly written to dissect racism wouldn't be racist agents with a racist agenda. And as a result, no one is banning those books (except conservatives who want to retcon American history).

Foundational models spewing racist white supremacist content when a trillion-dollar company forces it in your face is a vastly different scenario.

There's a clear difference.

Forgeties79 yesterday at 2:00 AM

> triggering a few racist responses from the model

I feel like, ironically, it would be the folks less concerned with political correctness/not being offensive who would abuse this opportunity to slander the project. But that's just my gut.

Alex2037 yesterday at 11:09 AM

nobody gives a shit about the journos and the terminally online. the smear campaign against AI is a cacophony, background noise that most people have learned to ignore, even here.

consider this: https://news.ycombinator.com/from?site=nytimes.com

HN's most beloved shitrag. day after day, they attack AI from every angle. how many of those submissions get traction at this point?

gnarbarian yesterday at 3:13 AM

this is FUD.

teaearlgraycold yesterday at 2:19 AM

Sure, but Grok already exists.