Hacker News

binsquare · today at 4:14 AM · 2 replies

Oddly, this is a case that suggests there is value in AI moderation tools: avoiding the bias inherent to human actors.


Replies

baq · today at 5:15 AM

Getting rid of bias in LLM training is a major research problem and anecdotally e.g., to my surprise, Gemini infers gender of the user depending on the prompt/what the question is about; by extension it’ll have many other assumptions about race, nationality, political views, etc.

MaxLeiter · today at 4:18 AM

They still have bias. I'm not sure it's necessarily worse, but there is bias inherent to LLMs.

https://misinforeview.hks.harvard.edu/article/do-language-mo...
