Well, it's all about linguistic relativism, right? If you can define "user harm" in terms of things the system does understand, I think you could get something that works.
The idea that language influences worldview isn't new; it was speculated about long before artificial intelligence was a thing. But that hypothesis is explicitly about influencing the worldview of humans. It doesn't postulate that language itself creates a worldview in whatever system processes text. Otherwise books would have a worldview.
It's a category error to apply it to an LLM. Language works on humans because we share a common experience as humans: it's not just a logical description of thoughts, it's also an arrangement of symbols that stand for experiences a human can have. That's why humans are able to empathically experience a story, because it triggers much more than just rational thought inside their brains.