
K0balt · yesterday at 11:36 AM · 1 reply

Exactly this. Their characteristics are by design constrained to be as human-like as possible, and optimized for human-like behavior. It makes perfect sense to characterize them in human terms and to attribute human-like traits to their human-like behavior.

Of course, they are *not* humans, but the language and concepts developed around human nature are the set of semantics that most closely applies, with some LLM-specific traits added on.


Replies

K0balt · yesterday at 10:41 PM

I’d love to hear an actual counterpoint; perhaps there is an alternative set of semantics that maps closely to LLMs. “Text prediction” paradigms fail to give an adequate intuition for the behavior of these devices, while anthropomorphic language is a blunt cudgel but at least gets in the ballpark.

If you stop comparing LLMs to the professional class and start comparing them to marginalized or low-performing humans, it hits different. It’s an interesting thought experiment. I’ve met a lot of people who are less interesting to talk to than a solid 12b finetune, and who would have a lot less utility for most kinds of white collar work than any recent SOTA model.