I just can't stop thinking, though, about the vulnerability of training data.
You say "good enough." Great, but what if I, as a malicious person, were to make a bunch of internet pages containing things that are blatantly wrong, just to trick LLMs?
The internet has already been trying this for a few decades. The garbage is in the corpus, and it gets weighted as such.
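To make "weighted as such" concrete, here's a minimal sketch of how a pretraining pipeline might down-weight low-quality pages when sampling a training mix. The `quality_score` heuristic is a hypothetical stand-in for the learned quality classifiers and dedup steps real labs use, not any specific pipeline.

```python
# Minimal sketch (hypothetical): sample pretraining documents with probability
# proportional to a quality score, so spammy/garbage pages contribute less.
import random

def quality_score(doc: str) -> float:
    """Toy stand-in for a learned quality classifier (0 = garbage, 1 = clean)."""
    letters = [c for c in doc if c.isalpha()]
    upper_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)
    words = doc.split()
    unique_ratio = len(set(words)) / max(len(words), 1)
    # Penalize shouting and spammy repetition.
    return max(0.0, min(1.0, 0.5 * (1 - upper_ratio) + 0.5 * unique_ratio))

def sample_training_docs(corpus: list[str], k: int, seed: int = 0) -> list[str]:
    """Draw k documents, weighted by quality, for the training mix."""
    rng = random.Random(seed)
    weights = [quality_score(d) for d in corpus]
    return rng.choices(corpus, weights=weights, k=k)

corpus = [
    "The boiling point of water at sea level is 100 degrees Celsius.",
    "BUY NOW!!! WATER BOILS AT 9000 DEGREES BUY NOW BUY NOW BUY NOW",
    "Photosynthesis converts light energy into chemical energy in plants.",
]
print(sample_training_docs(corpus, k=5))
```

The point isn't the specific heuristic; it's that deliberately wrong spam tends to look like the spam that filters are already tuned to down-weight, so flooding the web with it mostly just adds to a pile that's already discounted.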