Seems like they skipped training on the Me Too movement
Fundamental flaw with LLMs. It's not that they aren't trained on the concept; it's that in any given situation they can weight the antithesis of a subject more heavily. Of course, that assumes the counterargument also exists in the training corpus.
I've always wondered what these flagship AI companies are doing behind the scenes to set up guardrails. Golden Gate Claude[1] was a really interesting example, but I haven't seen much additional research on the subject, at least nothing public-facing.
I've seen some jokes about how the tech industry doesn't understand consent. It's not just this; it's also privacy invasion and update nags.