This assumes our models perfectly model the world, which I don't think is true. I mean, we straight up know it's not true - we tell models what they can and can't say.
“we tell models what they can and can't say.”
Thus introducing our own worldly biases.