Hacker News

xanderlewis · yesterday at 7:19 AM

> I don't get why you would say that.

Because it's hard to imagine the sheer volume of data it's been trained on.


Replies

utopiah · yesterday at 1:51 PM

And because ALL the marketing AND UX around LLMs is precisely trying to imply that they are thinking. It's not just the challenge of grasping the ridiculous amount of resources poured in, which does include the training sets; it's that actual people are PAID to convince everybody those tools are actually thinking. The prompt is a chatbox, the "..." appears just like in a chat with a human, the word "thinking" is used, "reasoning" is used, "hallucination" is used, etc.

All marketing.
