here's an example of how model censorship affects coding tasks: https://github.com/orgs/community/discussions/72603
You conversely get the same issue if you have no guardrails. E.g., Grok generating CP makes it completely unusable in a professional setting. I don't think this is a solvable problem.
These gender reveal parties are getting ridiculous.
I can't believe I'm using Grok... but I'm using Grok...
Why? I have a female salesperson, and I noticed she gets a different response from (female) receptionists than my male salespeople do. I asked ChatGPT about this, and it outright refused to believe me. It said I was imagining this and implied I was sexist or something. I ended up asking Grok, and it mentioned the phenomenon and some solutions. It was genuinely helpful.
Further, I brought this up with some of my contract advisors, and one of my female advisors mentioned the phenomenon before I even offered a hypothesis: "Girls are just like this."
Now I use Grok... I can't believe I'm saying that. I just want right answers.
Oh, lol. This, though, seems to be something that would affect only US models... ironically.