Hacker News

mdaniel · today at 2:01 AM

My hypothesis about the mismatch centers on "read": I think that when you wrote it, and when others think about that scenario, the surprise comes from our version of "read" being the implied "read and internalized," or at bare minimum "read for comprehension," while as best I can tell the LLM's version is "encoded tokens into vector space," not "encoded into a semantic graph."
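To make the distinction concrete, here's a minimal sketch (toy data; `vocab`, `embeddings`, and `semantic_graph` are names I made up, and this isn't how any real LLM stores anything) contrasting "tokens in a vector space," where relatedness is just geometric proximity, with an explicit semantic graph, where relationships are typed facts you can query:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Encoded tokens into vector space": each token is just a dense vector,
# and any relationship to other tokens is implicit in geometry.
vocab = {"dog": 0, "mammal": 1, "barks": 2}
embeddings = rng.normal(size=(len(vocab), 8))  # stand-in for a learned embedding table

def similarity(a: str, b: str) -> float:
    """Cosine similarity: the only 'relationship' the vector-space view offers."""
    va, vb = embeddings[vocab[a]], embeddings[vocab[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# "Encoded into a semantic graph": relationships are explicit, typed, and queryable.
semantic_graph = {
    ("dog", "is_a"): "mammal",
    ("dog", "can"): "bark",
}

print(similarity("dog", "mammal"))       # a bare number; meaning is implicit
print(semantic_graph[("dog", "is_a")])   # an explicit, inspectable fact
```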

I welcome the hair-splittery that is sure to follow about what it means to "understand" anything


Replies

8n4vidtmkvmk · today at 6:38 AM

That's the point, isn't it? The missing link. AIs can't yet truly comprehend, or internalize, or whatever you want to call it. That's probably equivalent to AGI or the singularity. We're not there yet, and feeding copious amounts of data into the existing architectures won't get us there either.

A human with all that data, if it could fit in their brain, would likely come up with something interesting. Even then... I'm not entirely sure it's so simple. I'd wager most of us have enough knowledge in our brains today to come up with something if we applied ourselves, but ideas don't spontaneously appear just because the knowledge is there.

What if we take our AI models and force them to continuously try making connections between unlikely things? The novel stuff is likely in the parts that don't already have strong connections: places where research is lacking but a link could plausibly exist. But how would it evaluate what's interesting?
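If anyone wanted to prototype that "force unlikely connections" loop, a naive version might just enumerate concept pairs and prioritize the ones with the weakest existing association. A minimal sketch under that assumption (the concept list and every name here are invented for illustration; the hard part, the "what's interesting" evaluator, is exactly the open question above):

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical concept vectors standing in for whatever the model already "knows".
concepts = ["protein folding", "traffic routing", "coral bleaching", "compiler design"]
vectors = {c: rng.normal(size=16) for c in concepts}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank concept pairs by *weakest* existing association, i.e. the under-explored links.
pairs = sorted(
    itertools.combinations(concepts, 2),
    key=lambda p: cosine(vectors[p[0]], vectors[p[1]]),
)

for a, b in pairs[:3]:
    # A real system would hand each pair to a model and ask it to propose a connection.
    # What's missing is a way to score whether the proposal is actually interesting,
    # rather than merely unlikely.
    print(f"force a connection: {a} <-> {b}")
```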