I know it’s really important to write and vocalize one’s alignment with the values of the day, but I don’t think language models being structurally incapable of offending your favorite race/ethnicity/caste should be an objective of AI labs. Language models are just systems, and I’m not sure why we think users are not responsible for how they use their outputs. For the same reason, I don’t dismiss pens as tools of “racism” just because somebody could use one to write a naughty word on a bathroom stall.
You probably live somewhere where harassment is a crime, right? There are probably speech codes, too? Isn’t that enough? Do we really need to orient every effort of every person on earth around ethical fashions that change every few years?
Never had a pen claim to be MechaHitler and constantly talk about white genocide for no reason, but yeah, great analogy.
Elon Musk has manipulated Grok's outputs to target certain demographics. It is important to highlight this fact, as some people perceive the AI as an objective tool rather than a curated one.
Furthermore, I found your final paragraph unclear: are you implying that since harassment is a perennial issue, we should disregard any standards that might mitigate it?
It's being biased on purpose. Musk has intervened multiple times when he believed Grok's responses were too "woke" or "leftist".
https://www.nytimes.com/2025/09/02/technology/elon-musk-grok...
In response to Grok saying that the "woke mind virus is often exaggerated," the prompt was tweaked so that Grok now says the woke mind virus "poses significant risks."
If you truly believed in what your comment states then you would oppose this sort of editorializing. But somehow I doubt this is a sincere argument.
> but I don’t think language models being structurally incapable of offending your favorite race/ethnicity/caste should be an objective of AI labs.
The opposite should not be an objective either, and Elon has been very openly manipulating what Grok says.