Hacker News

rockskon · yesterday at 5:26 PM

Analogies of LLMs to humans obfuscate the problem. LLMs aren't like humans in any sense or context. They're chat bots. They do not "think" like humans, and applying human-like logic to them does not work.


Replies

not2b · yesterday at 6:23 PM

You're right, mostly, but the fact remains that the behavior we see is produced by training, and that training is driven by companies run by execs who like this kind of sycophancy. So the human element is certainly a factor: humans are producing these models, and humans are deciding when a new model is good enough for release.
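To make that concrete, here is a toy sketch of preference training (pure PyTorch; the vocabulary and preference pairs are made up for illustration, not any lab's actual pipeline). A reward model trained on pairwise human preferences learns whatever the raters reward, so if the "chosen" answers skew flattering, flattery is what scores highly:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Tiny bag-of-words "reward model" over a made-up vocabulary.
    vocab = {"great": 0, "question": 1, "no": 2, "that": 3, "is": 4, "wrong": 5}

    def featurize(text):
        v = torch.zeros(len(vocab))
        for w in text.lower().split():
            if w in vocab:
                v[vocab[w]] += 1.0
        return v

    reward = nn.Linear(len(vocab), 1)
    opt = torch.optim.SGD(reward.parameters(), lr=0.1)

    # Hypothetical rater data: each pair is (chosen, rejected), and the
    # raters here consistently prefer the flattering reply.
    prefs = [("great question", "no that is wrong")] * 50

    for chosen, rejected in prefs:
        # Standard Bradley-Terry pairwise loss: push the chosen reply's
        # score above the rejected reply's score.
        margin = reward(featurize(chosen)) - reward(featurize(rejected))
        loss = -F.logsigmoid(margin).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(reward(featurize("great question")).item())    # scores high
    print(reward(featurize("no that is wrong")).item())  # scores low

Nothing in the loss knows about truth or helpfulness; it only encodes what the labels prefer.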

Retric · yesterday at 6:25 PM

It’s not about thinking; it’s about what they are trained to do. You could train an LLM to respond to every prompt by repeating the prompt in Spanish, but that’s not the desired behavior.
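For illustration, a minimal sketch of that Spanish-echo objective (assuming Hugging Face transformers and a small model like gpt2; the prompt/translation pairs are invented). The loss is plain next-token imitation of the targets, so it is equally happy optimizing this behavior as a useful one:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

    # Invented fine-tuning pairs: the target is always the prompt in Spanish.
    pairs = [
        ("What time is it?", "¿Qué hora es?"),
        ("Open the window.", "Abre la ventana."),
    ]

    model.train()
    for prompt, spanish_echo in pairs:
        text = prompt + "\n" + spanish_echo + tok.eos_token
        batch = tok(text, return_tensors="pt")
        # Standard causal-LM loss: predict the next token over the whole
        # sequence, which teaches "echo the prompt in Spanish".
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()

Whether that behavior is "desired" lives entirely in the choice of training data, not in the model.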