> LLMs are just text prediction. That's what they are.
This sort of glib talking point really doesn't pass muster, because if you showed the current state of affairs to a random developer from 2015, you would absolutely blow their damned socks off.
Their socks would be blown off by the "Unreasonable Effectiveness of [text prediction]", sure, but it is still text prediction.
That's the root cause of the problems we still haven't solved: the inability to give the same answer to the same question, the inability to do rigorous maths or logic (any question with only one correct answer, really), and hallucinations.
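To make the "same question, different answer" point concrete, here's a toy sketch of why sampling-based decoding is non-deterministic. The distribution and numbers below are made up for illustration, not taken from any real model:

```python
import random

# Hypothetical next-token probabilities a model might assign after "2 + 2 ="
next_token_probs = {"4": 0.90, "four": 0.05, "5": 0.04, "22": 0.01}

def sample_token(probs, temperature=1.0):
    """Pick the next token; temperature > 0 means a random draw each time."""
    if temperature == 0:
        # Greedy decoding: always the most likely token -> deterministic.
        return max(probs, key=probs.get)
    # Reshape with temperature, renormalize, then draw at random.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    r = random.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token

# The same "question", asked five times with typical sampling settings:
print([sample_token(next_token_probs, temperature=1.0) for _ in range(5)])
# e.g. ['4', '4', 'four', '4', '5'] -- usually right, occasionally not.
print(sample_token(next_token_probs, temperature=0))  # always '4'
```

With temperature at zero the toy decoder is deterministic, but real deployments typically sample, so identical prompts can and do produce different completions.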