Hacker News

riskable · yesterday at 2:58 PM · 1 reply

Context size helps with some things, but generally speaking it just slows everything down. Instead of huge contexts, what we need is actual reasoning.

I predict that in the next two to five years we're going to see a breakthrough in AI that doesn't involve LLMs but makes them 10x more effective at reasoning and completely eliminates the hallucination problem.

We currently have "high thinking" models that double- and triple-check their own output, and we call that "reasoning," but that's not really what they're doing. They're just passing their own output back through themselves a few times and hoping they catch mistakes. It kind of works, but it's very slow and takes a lot more resources.
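
Roughly what I mean by "passing its own output through itself" (hand-wavy Python; llm() is a made-up stand-in for whatever model call you like, not a real API):

    def llm(prompt: str) -> str:
        # Placeholder for an actual model call; swap in whatever API you use.
        return "..."

    def generate_with_self_check(prompt: str, passes: int = 3) -> str:
        # Draft an answer, then repeatedly ask the same model to critique
        # and revise it, hoping it catches its own mistakes along the way.
        draft = llm(prompt)
        for _ in range(passes):
            critique = llm("Check this answer for mistakes:\n" + draft)
            if "no mistakes" in critique.lower():
                break  # the model says its own answer looks fine
            draft = llm("Revise the answer using this critique:\n" + critique
                        + "\n\nAnswer:\n" + draft)
        return draft

Same model checking itself every pass, which is why it burns so much time and compute.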

What we need instead is a reasoning model that can be called upon to run logic-based tests on LLM output, or better yet, on the output before it's generated (if that's even possible; I'm not sure it is).

My guess is that it'll end up being something like a "logic-trained" model instead of a "shitloads of raw data"-trained model. Imagine a couple of terabytes of truth statements like "rabbits are mammals" and "mammals have mammary glands." Then, whenever the LLM wants to generate output suggesting someone put rocks on pizza, it fails the internal truth check "rocks are not edible by humans," or better yet, "rocks are not suitable as a pizza topping," a statement that was added to the training set as a result of regression testing.
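
To make that concrete, here's a toy version of the check I'm picturing (all the names and the string matching are made up just to illustrate; a real version would need actual logical entailment, not lookups):

    # Toy version of the internal truth check. Everything here is made up;
    # a real system would need real entailment/contradiction detection,
    # not string flipping against a lookup table.
    TRUTH_STATEMENTS = {
        "rabbits are mammals",
        "mammals have mammary glands",
        "rocks are not suitable as a pizza topping",
    }

    def contradicts_known_truths(claim: str) -> bool:
        # Flip the negation and see if the opposite statement is something we "know".
        if " are not " in claim:
            flipped = claim.replace(" are not ", " are ")
        else:
            flipped = claim.replace(" are ", " are not ")
        return flipped in TRUTH_STATEMENTS

    def failing_claims(claims: list[str]) -> list[str]:
        # Claims that fail the check would get sent back for regeneration.
        return [c for c in claims if contradicts_known_truths(c)]

    print(failing_claims(["rocks are suitable as a pizza topping", "rabbits are mammals"]))
    # -> ['rocks are suitable as a pizza topping']

The interesting part isn't the lookup itself, it's where the truth statements come from and how the check gets wired into generation.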

Over time, such a "logic model" would grow and grow—just like a human mind—until it did a pretty good job at reasoning.


Replies

2snakes · yesterday at 4:24 PM

Wasn’t this idea the basic premise of Coq? Why didn’t it work?