
martin-t · 05/14/2025

> I tell it the secret simple thing it’s missing and it gets it.

Anthropomorphizing LLMs is not helpful. It doesn't get anything; you just gave it new tokens, ones which are more closely correlated with the correct answer. It then generates responses similar to what a human would say in the same situation.

Note: I first wrote "it also mimics what a human would say", then realized I was anthropomorphizing a statistical algorithm and had to correct myself. It's hard sometimes, but language shapes how we think (which is ironically why LLMs are a thing at all), and using terms which better describe how it really works is important.
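
To make the mechanical framing concrete: from the model's side, a "hint" is just additional conditioning tokens that shift the next-token distribution. A minimal sketch of that idea, assuming the Hugging Face transformers library with GPT-2 as a stand-in model (the prompts here are hypothetical):

    # Toy demonstration: appending tokens to the context changes the
    # conditional next-token distribution -- no "getting it" required.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def continue_text(prompt: str) -> str:
        inputs = tokenizer(prompt, return_tensors="pt")
        # Greedy decoding: deterministically pick the most probable token.
        out = model.generate(**inputs, max_new_tokens=20, do_sample=False)
        return tokenizer.decode(out[0], skip_special_tokens=True)

    base = "Q: Why does my loop never stop?\nA:"
    hinted = "Q: Why does my loop never stop?\nHint: the counter is never incremented.\nA:"

    # Same weights both times; only the conditioning tokens differ,
    # so the continuation differs.
    print(continue_text(base))
    print(continue_text(hinted))

Nothing in the second run "understood" the hint; the extra tokens simply moved probability mass toward a different continuation.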


Replies

ben_w · 05/14/2025

Given that LLMs are trained on humans, who don't respond well to being dehumanised, I expect anthropomorphising them to be better than the opposite of that.

https://www.microsoft.com/en-us/worklab/why-using-a-polite-t...

tippytippytango · 05/14/2025

Patronizing much?

Suppafly · 05/14/2025

> Anthropomorphizing LLMs is not helpful

It's a feature of language to describe things in those terms even if they aren't accurate.

> using terms which better describe how it really works is important

Sometimes, especially if you're doing something where that matters, but abstracting those details away is also useful when trying to communicate clearly in other contexts.