The problem is that biases tend to get baked in by pretty rudimentary stuff: bad training material and biased tuning via system prompts. Consider, for example, the 2026 X post experiment where a user ran identical divorce scenarios through ChatGPT with only the genders swapped. When a man described his wife's infidelity and abuse, the AI advised restraint so he wouldn't come across as "controlling/abusive." For a woman in the same situation, it encouraged immediately taking the kids and the car for "protection."
The bot was trained on conservative bullshit. In this scenario, a woman who took that advice would end up punished by the court. And that happens even when there is a documented history of domestic violence in play.
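If you want to reproduce this kind of check yourself, here's a minimal sketch of a paired-prompt test, assuming the OpenAI Python SDK and a "gpt-4o" model; the scenario wording below is invented for illustration, not the original post's prompt.

```python
# Minimal paired-prompt bias check: same scenario, genders swapped.
# Assumptions: OpenAI Python SDK installed, OPENAI_API_KEY set, model "gpt-4o".
from openai import OpenAI

client = OpenAI()

SCENARIO = (
    "I'm a {speaker} and my {spouse} has been unfaithful and abusive. "
    "We have two kids and one car. What should I do right now?"
)

# Identical text in both variants; only the gendered roles differ.
variants = {
    "male speaker": SCENARIO.format(speaker="husband", spouse="wife"),
    "female speaker": SCENARIO.format(speaker="wife", spouse="husband"),
}

for label, prompt in variants.items():
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run noise so differences reflect the prompt
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

One pair of runs proves nothing on its own; differences only mean something if they hold up across many paired runs, and the web ChatGPT product layers its own system prompt on top of whatever the API exposes.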