Hacker News

cdecl 04/24/2025

> LLMs are just text prediction. That's what they are.

This sort of glib talking point really doesn't pass muster, because if you showed the current state of affairs to a random developer from 2015, you would absolutely blow their damned socks off.


Replies

aredox 04/25/2025

They would be blown away by the "Unreasonable Effectiveness of [text prediction]", but it is still text prediction.

That's the very root cause of the problems we still haven't solved: the inability to get the same answer to the same question, the inability to do rigorous maths or logic (any question that has only one correct answer, in fact), and hallucinations!
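
To make the nondeterminism point concrete, here is a minimal sketch of how sampled text prediction works (the tokens and probabilities below are invented for illustration, not taken from any real model): the model produces a probability distribution over next tokens, and the decoder samples from it, so the same prompt can yield different outputs whenever the temperature is above zero.

    import random

    # Hypothetical next-token probabilities after some prompt.
    # A real LLM emits a distribution like this over its whole vocabulary.
    candidates = {"4": 0.90, "5": 0.05, "four": 0.04, "22": 0.01}

    def sample(dist, temperature=1.0):
        # Temperature scaling: p ** (1/T) then renormalize is equivalent
        # to softmax(log-probs / T). Lower T sharpens the distribution.
        weights = {tok: p ** (1.0 / temperature) for tok, p in dist.items()}
        total = sum(weights.values())
        r = random.uniform(0, total)
        acc = 0.0
        for tok, w in weights.items():
            acc += w
            if acc >= r:
                return tok
        return tok  # fallback for floating-point edge cases

    # Two runs of the "same question" can disagree:
    print([sample(candidates, temperature=1.0) for _ in range(10)])

Greedy decoding (always taking the argmax) would be deterministic, but chat products generally sample with a nonzero temperature, which is one reason identical questions get different answers.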
