Hacker News

KoolKat23 · last Monday at 11:15 PM

Given a random prompt, the overall probability of seeing a specific output string is almost zero, since there are astronomically many possible token sequences.

The same goes for humans. Most award-winning work is novel research built on pre-existing work, and that is something an LLM is capable of doing.
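To put a number on "astronomically many possible token sequences": a back-of-envelope sketch in Python, where the vocabulary size and output length are illustrative assumptions rather than measured values.

    import math

    # Illustrative numbers: a ~50k-token vocabulary (roughly GPT-2
    # sized) and a 100-token output.
    vocab_size = 50_000
    seq_len = 100

    # If every sequence were equally likely, the chance of any one
    # specific output string would be vocab_size ** -seq_len.
    log10_prob = -seq_len * math.log10(vocab_size)
    print(f"P(one specific sequence) ~ 10^{log10_prob:.0f}")  # ~ 10^-470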


Replies

adamzwasserman · last Monday at 11:22 PM

LLMs don't use 'overall probability' in any meaningful sense. During training, gradient descent carves highly concentrated 'gravity wells' of correlated token relationships: the resulting probability distribution is extremely non-uniform, heavily weighted toward patterns seen in the training data. The model isn't selecting from 'astronomically many possible sequences' with equal probability; it's navigating pre-carved channels in high-dimensional space. That's fundamentally different from novel discovery.
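A minimal sketch of what 'extremely non-uniform' looks like at a single decoding step, using toy logits rather than a real model; the vocabulary size, the +10 boost, and the count of five boosted tokens are all assumptions chosen for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size = 50_000

    # Toy next-token logits: most tokens sit near zero, a handful are
    # strongly reinforced, mimicking the concentrated 'wells' that
    # training carves out.
    logits = rng.normal(0.0, 1.0, vocab_size)
    logits[:5] += 10.0

    # Numerically stable softmax over the vocabulary.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    print(f"top-1 token probability: {probs.max():.3f}")
    print(f"mass in top 5 tokens:    {np.sort(probs)[-5:].sum():.3f}")
    print(f"uniform baseline:        {1 / vocab_size:.1e}")

Even a modest logit gap lets a few tokens soak up most of the probability mass, so the effective choice at each step is tiny compared with the full vocabulary.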
