
utopiah · yesterday at 1:51 PM · 1 reply

And because ALL the marketing AND UX around LLMs is precisely trying to imply that they are thinking. It's not just the challenge of grasping the ridiculous amount of resources poured in, including the training sets; it's that actual people are PAID to convince everybody these tools are actually thinking. The prompt is a chat box, the "..." typing indicator appears just like in a chat with a human, the word "thinking" is used, the word "reasoning" is used, "hallucination" is used, etc.

All marketing.


Replies

xanderlewis · today at 9:18 AM

You're right. Unfortunately, it seems that not many are willing to admit this while still being (rightly) impressed by how remarkably effective LLMs can be, at least at manipulating language.