Hacker News comment

garganzol, today at 6:09 PM (0 replies)

And after all the "safeguards" are applied, the model becomes useless: it starts suspecting gender discrimination, racism, and the like everywhere, without any grounded evidence or discernment.

For example, I used a ChatGPT model for risk assessment of anonymized e-commerce orders. Initially it performed well, but after a later update it stopped cooperating and instead raised concerns about applying statistical analysis to gender-related variables, despite the data being anonymized and the task being legitimate.
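For context, here is a minimal sketch of what "anonymized" might mean for such order data before it ever reaches a model. The field names, the salted-hash scheme, and the `anonymize_order` helper are my assumptions for illustration, not the commenter's actual pipeline:

```python
import hashlib

# Hypothetical anonymization step: strip direct identifiers and replace the
# customer ID with a salted hash, so repeat-customer signals survive while
# no PII is sent to the model for risk scoring.
SALT = "example-salt"  # in practice, a secret value kept out of source control

def anonymize_order(order: dict) -> dict:
    anonymized = dict(order)
    # Drop fields that directly identify the customer.
    for pii_field in ("name", "email", "shipping_address"):
        anonymized.pop(pii_field, None)
    # Pseudonymize the customer ID (truncated salted SHA-256).
    anonymized["customer_id"] = hashlib.sha256(
        (SALT + str(order["customer_id"])).encode()
    ).hexdigest()[:16]
    return anonymized

order = {
    "customer_id": 12345,
    "name": "Jane Doe",
    "email": "jane@example.com",
    "shipping_address": "1 Main St",
    "total": 249.99,
    "item_count": 3,
    "payment_method": "card",
}
print(anonymize_order(order))
```

Statistical fields like totals, item counts, and payment method remain usable for risk analysis; only the directly identifying fields are removed or pseudonymized.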

This is on the same level of hypocrisy as if a C compiler accused me of choosing "he"/"she"/"they" as variable names.