Hacker News

anon84873628 · today at 7:02 PM

I'm skeptical of LLM "reasoning", but they sure as hell know a lot. That's what the embeddings are: a giant map of semantic relationships between concepts.
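The "semantic relationship" part is usually measured as cosine similarity between vectors: concepts that co-occur in similar contexts end up pointing in similar directions. A minimal sketch, using made-up 4-dimensional toy vectors (real embeddings have hundreds or thousands of dimensions, and the words/values here are purely illustrative):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|), ranges from -1 to 1.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical toy embeddings -- hand-picked so related words point the same way.
emb = {
    "king":  [0.9, 0.8, 0.1, 0.0],
    "queen": [0.9, 0.7, 0.2, 0.1],
    "apple": [0.1, 0.0, 0.9, 0.8],
}

print(cosine(emb["king"], emb["queen"]))  # near 1: "related"
print(cosine(emb["king"], emb["apple"]))  # near 0: "unrelated"
```

That's the whole trick: "knowing" two concepts are related just means their vectors are close.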


Replies

WorldMaker · today at 7:59 PM

Embeddings are still mostly just vectors into n-dimensional K-means-style clusters. It isn't "knowing" that two things are related, with evidence to back it up; it's guessing that two things are statistically likely to be related, based on trained patterns, and running with it without evidence.

It has no "semantic understanding" as we would define it. It's just increasingly good at winning cluster lotteries because we've increased the amount of training data to incredible heights.
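To make the "vectors into clusters" point concrete: the core step is nothing more than nearest-centroid assignment, pure geometry with no evidence consulted. A toy sketch with hypothetical 2-d centroids and made-up category names:

```python
import math

# Hypothetical centroids for two clusters in a 2-d embedding space.
centroids = {
    "animals": [0.9, 0.1],
    "tools":   [0.1, 0.9],
}

def assign(vec):
    # Nearest-centroid assignment (the core step of K-means):
    # whichever centroid is geometrically closest wins, full stop.
    return min(centroids, key=lambda c: math.dist(vec, centroids[c]))

# The system doesn't "know" what these points mean; they just land near a cluster.
print(assign([0.2, 0.8]))   # -> "tools"
print(assign([0.85, 0.2]))  # -> "animals"
```

Whether the assignment is *true* never enters the computation; only distance does.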

wiseowise · today at 7:44 PM

Encyclopedias and Wikipedia know a lot too. Knowledge isn't of much use on its own; it's about how you use it.

koonsolo · today at 7:22 PM

I agree with you, but a big drawback is that the accuracy or confidence of their output can't be reliably estimated.

So they surely know a lot, but you're never sure whether any given piece of info is correct.
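Part of why confidence is hard to read off: models do emit probabilities over next tokens (via a softmax), but a high probability only means the token is statistically likely given training data, not that the resulting statement is true. A minimal softmax sketch with made-up logits and made-up candidate tokens:

```python
import math

def softmax(logits):
    # Standard numerically-stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates and logits for some factual question.
candidates = ["1912", "1905", "1921"]
probs = softmax([5.0, 2.0, 1.0])

for tok, p in zip(candidates, probs):
    print(f"{tok}: {p:.2f}")
```

Here the top token gets ~94% probability, but nothing in that number tells you whether "1912" is actually the right answer; the model can be confidently wrong.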