
nottorp · yesterday at 7:59 AM

I don't think specific examples matter.

My opinion is that since neural networks, and especially these LLMs, aren't exactly deterministic, any kind of 'we want to avoid liability' censorship will affect all answers, whether or not they're related to the topics they want to censor.

And we get enough hallucinations even without censorship...