
AdamH12113 · yesterday at 4:57 PM

Anthropomorphizing LLMs is something that happens in the design stage, when they're given human names and trained to emit first-person sentences. If AI companies and developers stop anthropomorphizing them, users won't be misled in the first place.
