Hacker News

vladsh · yesterday at 10:27 PM · 7 replies

LLMs get over-analyzed. They’re predictive text models trained to match patterns in their data, statistical algorithms, not brains, not systems with “psychology” in any human sense.

Agents, however, are products. They should have clear UX boundaries: show what context they’re using, communicate uncertainty, validate outputs where possible, and expose performance so users can understand when and why they fail.
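A minimal sketch of what those boundaries could look like as a response contract, in Python. The AgentResponse shape, field names, and present helper here are illustrative assumptions, not an existing framework's API:

```python
# Hypothetical sketch (not any particular framework's API): an agent response
# contract that surfaces context, uncertainty, and validation status instead
# of returning bare text.
from dataclasses import dataclass, field


@dataclass
class AgentResponse:
    answer: str                                        # the agent's actual output
    sources: list[str] = field(default_factory=list)   # context the agent actually used
    confidence: float = 0.0                            # rough uncertainty estimate, 0..1
    validated: bool = False                            # did the output pass downstream checks?
    validation_notes: str = ""                         # why validation passed or failed


def present(resp: AgentResponse) -> str:
    """Render the response so the user can see when and why to distrust it."""
    warning = "" if resp.validated else " (unvalidated -- double-check before acting)"
    cited = ", ".join(resp.sources) or "no sources"
    return f"{resp.answer}{warning}\n[confidence {resp.confidence:.0%}; based on: {cited}]"


if __name__ == "__main__":
    print(present(AgentResponse(
        answer="Your refund was approved.",
        sources=["billing record lookup"],
        confidence=0.62,
        validated=False,
        validation_notes="No confirmation found in the billing system.",
    )))
```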

IMO the real issue is that raw, general-purpose models were released directly to consumers. That normalized under-specified consumer products and created the expectation that users would interpret model behavior, define their own success criteria, and manually handle edge cases themselves, sometimes with severe real-world consequences.

I’m sure the market will fix itself with time, but I hope more people learn when not to use these half-baked AGI “products”.


Replies

DuperPower · yesterday at 11:54 PM

Because they wanted to sell the illusion of consciousness. ChatGPT, Gemini, and Claude are human simulators, which is lame. I want autocomplete prediction, not this personality-and-retention stuff that only makes the agents dumber.

andreyk · today at 3:24 AM

To say that LLMs are 'predictive text models trained to match patterns in their data, statistical algorithms, not brains, not systems with “psychology” in any human sense' is not entirely accurate. Classic LLMs like GPT-3, sure. But LLM-powered chatbots (ChatGPT, Claude - which is what this article is really about) go through much more than just predict-next-token training (RLHF, presumably now reasoning training, who knows what else).

nowittyusername · today at 1:27 AM

You hit the nail on the head. Anyone who's been working intimately with LLMs comes to the same conclusion. The LLM itself is only one small but important part, meant to be used in a more complicated and capable system. And that system will not have the same limitations as the raw LLM itself.

adleyjulian · yesterday at 11:02 PM

> LLMs get over-analyzed. They’re predictive text models trained to match patterns in their data, statistical algorithms, not brains, not systems with “psychology” in any human sense.

Per the predictive processing theory of mind, human brains are similarly predictive machines. "Psychology" is an emergent property.

I think it's overly dismissive to point to the fundamentals being simple, i.e. that it's a token-prediction algorithm, when it's clearly the unexpected emergent properties of LLMs that everyone is interested in.

basch · yesterday at 11:34 PM

They are human in the sense that they are reinforced to exhibit human-like behavior, by humans. A human byproduct.

more_corn · today at 2:13 AM

Sure, but they reflect all known human psychology because they’ve been trained on our writing. Look up the Anthropic tests. If you make an agent based on an LLM, it will display very human behaviors, including aggressive attempts to prevent being shut down.

kcexn · today at 1:08 AM

A large part of that training is done by asking people if responses 'look right'.

It turns out that people are more likely to think a model is good when it kisses their ass than if it has a terrible personality. This is arguably a design flaw of the human brain.