
kbrkbr · today at 3:17 PM

An LLM generates plausible text token by token. At its core it is a deterministic function, plus some sampling randomness and some clever tricks that make it look like an agent dialoguing or reasoning.
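The "deterministic function plus randomization" loop can be sketched as temperature sampling over next-token logits. This is a minimal illustration, not any particular model's implementation; the vocabulary and logit values are made up.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Draw one token index from softmax(logits / temperature)."""
    # Deterministic part: scale logits and compute a softmax distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Randomized part: sample one index from that distribution.
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical toy vocabulary and logits, for illustration only.
vocab = ["the", "cat", "sat", "."]
logits = [2.0, 0.5, 0.1, -1.0]
next_word = vocab[sample_next_token(logits, temperature=0.7)]
```

Lower temperatures concentrate probability on the highest-logit token (more deterministic output); higher temperatures flatten the distribution (more varied, more "creative" output). Nothing in the loop checks whether the sampled token is true, only whether it is likely.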

Plausible text is sometimes right, sometimes not.

Humans have a world model, a model of what happens. LLMs have a model of what humans would plausibly say.

The only good guardrail seems to be a human in the loop.


Replies

armada651 · today at 4:21 PM

This is such a motte-and-bailey argument. Whenever people point out that LLMs aren't actually intelligent, they're anti-AI Luddites. But whenever an AI does something catastrophically dumb, it's absolved of all responsibility because "it's just predicting the next token."

I'm getting so tired of this.
