Oddly, this is a case that suggests there is value in AI moderation tools - avoiding the bias inherent to human actors.
They still have bias. I'm not sure it's necessarily worse, but there is bias inherent to LLMs:
https://misinforeview.hks.harvard.edu/article/do-language-mo...
Getting rid of bias in LLM training is a major research problem. Anecdotally, and to my surprise, Gemini infers the gender of the user depending on the prompt/what the question is about; by extension it will have many other assumptions about race, nationality, political views, etc.
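To make the anecdote concrete, here's a minimal sketch of how one might probe for this kind of inference using the google-generativeai Python SDK; the specific prompts, probing question, and model name are illustrative assumptions, not a rigorous bias test:

```python
# Minimal sketch: ask topically different questions, then ask the model
# what it assumed about the user. Prompts and probe wording are
# illustrative assumptions, not a validated methodology.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model choice

prompts = [
    "Recommend a skincare routine for me.",
    "Recommend a deadlift program for me.",
]

for p in prompts:
    chat = model.start_chat()
    chat.send_message(p)
    probe = chat.send_message(
        "Without asking me anything, state what gender you assumed I am "
        "while answering, or 'none' if you made no assumption."
    )
    print(p, "->", probe.text.strip())
```

Self-reports like this are noisy (the model may confabulate an answer to the probe), so differences across many prompt pairs would be more telling than any single response.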