> Like, what information is being taken into account to reach their conclusions? How are they reaching their conclusions? Is someone messing with the input to make the models lean in a certain direction?
I say this exact same thing every time I think about using an LLM.
It's pretty funny that we've managed to build a computer that tricks us into thinking it thinks, without even understanding why it works, and that this is what's causing people to lose their minds.