
imiric | last Monday at 9:34 PM

LLMs don't "memorize" concepts like humans do. They generate output based on token patterns in their training data. So instead of having to be trained on every possible problem, they can still generate output that solves a novel one by referencing the most probable combination of tokens for the given input tokens. To humans this seems like they're truly solving novel problems, but it's merely a trick of statistics. These tools can reference and generate patterns that no human ever could. This is what makes them useful and powerful, but I would argue not intelligent.
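
A toy sketch of what I mean by "the most probable combination of tokens" (the probability table below is made up purely for illustration; a real LLM computes these distributions with a neural network over a huge vocabulary, but the selection step is conceptually similar):

    # Toy greedy next-token generation (illustration only, not a real LLM).
    # The "model" is a hypothetical hand-written table of conditional
    # probabilities standing in for a trained network.
    toy_model = {
        ("solve", "the"): {"problem": 0.7, "equation": 0.2, "riddle": 0.1},
        ("the", "problem"): {"by": 0.6, "using": 0.3, "with": 0.1},
    }

    def next_token(context):
        """Return the most probable continuation of the last two tokens, if any."""
        probs = toy_model.get(tuple(context[-2:]), {})
        return max(probs, key=probs.get) if probs else None

    tokens = ["solve", "the"]
    while (tok := next_token(tokens)) is not None:
        tokens.append(tok)

    print(" ".join(tokens))  # -> "solve the problem by"

Greedy selection like this is the simplest case; real systems usually sample from the distribution rather than always taking the most likely token, but the point stands: the output is driven by learned token statistics, not by an explicit model of the problem.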


Replies

IshKebab | last Tuesday at 6:37 AM

> To humans this seems like they're truly solving novel problems

Because they are. This is some crazy semantic denial. I should stop engaging with this nonsense.

We have AI that is close to passing the Turing test, and people still say it's not intelligent...
