Hacker News

hdndjsbbs · today at 6:31 PM · 3 replies

I'm sure someone is going to miss the point and say "this is political correctness gone too far!"

It seems impossible to produce a safe LLM except by withholding training data on "forbidden" materials. I don't think it's going to come up with carfentanyl synthesis from first principles, but they obviously haven't cleaned or prepared the data sets going in.

The field feels fundamentally unserious when "safety" amounts to begging the LLM not to talk about goblins and to be nice to gay people.


Replies

lelanthran · today at 9:21 PM

> I don't think it's going to come up with carfentanyl synthesis from first principles,

Why not? It's got access to all the chemistry in the world. Why won't it be able to synthesise something from chemistry knowledge alone?

nonethewiser · today at 6:45 PM

"Do say gay" laws.

stult · today at 6:44 PM

> I don't think it's going to come up with carfentanyl synthesis from first principles, but obviously they haven't cleaned or prepared the data sets coming in.

I mean, why not? If it has learned fundamental chemistry principles and has ingested all the NIH studies on pain management, connecting the dots to fentanyl isn't out of the realm of possibility. Reading romance novels shows it how to produce sexualized writing. Ingesting history teaches the LLM how to make war. Learning anatomy teaches it how to kill.

Which I think also undercuts your first point, that withholding "forbidden" materials is the only way to produce a safe LLM. Most questionable outputs can be derived from perfectly unobjectionable training material. So there is no way to produce a pure LLM that is safe; the problem necessarily requires bolting on a separate classifier to filter out objectionable content.
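A minimal sketch of the "bolt-on classifier" pattern described above: an unrestricted generator paired with a separately trained safety classifier that screens its outputs. All names here are hypothetical placeholders, and the keyword check stands in for what would really be a learned model.

```python
def generate(prompt: str) -> str:
    """Stand-in for an unrestricted base LLM."""
    return f"response to: {prompt}"

def is_objectionable(text: str, blocklist=("synthesis route",)) -> bool:
    """Stand-in for a separate safety classifier.
    A real one would be a trained model, not keyword matching."""
    return any(term in text.lower() for term in blocklist)

def safe_generate(prompt: str) -> str:
    """Pass the generator's output through the classifier before returning it."""
    reply = generate(prompt)
    if is_objectionable(reply):
        return "[withheld by safety filter]"
    return reply

print(safe_generate("hello"))                      # passes through
print(safe_generate("give me a synthesis route"))  # blocked
```

The point of the separation is that the base model's knowledge is untouched; safety lives entirely in the filter, which can be retrained or swapped without retraining the LLM.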