
mcny · yesterday at 10:26 AM

I would like to see the day when the context size is in gigabytes or tens of billions of tokens, not RAG or whatever, actual context.


Replies

riskable · yesterday at 2:58 PM

Context size helps some things but generally speaking, it just slows everything down. Instead of huge contexts, what we need is actual reasoning.

I predict that in the next two to five years we're going to see a breakthrough in AI that doesn't involve LLMs but makes them 10x more effective at reasoning and completely eliminates the hallucination problem.

We currently have "high thinking" models that double- and triple-check their own output, and we call that "reasoning," but that's not really what they're doing. They're just passing their own output through themselves a few times and hoping they catch mistakes. It kind of works, but it's very slow and takes a lot more resources.
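For what it's worth, that self-checking loop is roughly the shape of the sketch below. This is only illustrative: `llm` is a hypothetical prompt-in/text-out callable standing in for any real model API, and the prompts and pass count are made up.

    from typing import Callable

    def self_checked_answer(llm: Callable[[str], str], question: str, passes: int = 2) -> str:
        # "llm" is any function that maps a prompt string to a completion string
        # (a hypothetical stand-in for a real chat-completion call).
        answer = llm(question)
        for _ in range(passes):
            # Re-feed the model its own answer and ask it to hunt for mistakes.
            critique = llm(
                f"Question: {question}\nAnswer: {answer}\n"
                "List any factual or logical errors in this answer."
            )
            # Revise using the critique. Each pass is a full extra generation,
            # which is why this style of "reasoning" is slow and expensive.
            answer = llm(
                f"Question: {question}\nDraft: {answer}\n"
                f"Critique: {critique}\nWrite a corrected answer."
            )
        return answer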

What we need instead is a reasoning model that can be called upon to perform logic-based tests on LLM output, or, even better, before the output is generated (if that's even possible; not sure it is).

My guess is that it'll end up being something like a "logic-trained" model instead of a "shitloads of raw data"-trained model. Imagine a couple of terabytes of truth statements like "rabbits are mammals" and "mammals have mammary glands." Then, whenever the LLM wants to generate output suggesting someone put rocks on pizza, it fails an internal truth check such as "rocks are not edible by humans", or better, "rocks are not suitable as a pizza topping", which had been added to the training set as a result of regression testing.
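A toy sketch of what such a truth check could look like, assuming some upstream step has already turned a draft answer into simple claims; the triple format, the store contents, and that extraction step are all illustrative assumptions, not a real system.

    # Tiny stand-in for the "couple terabytes of truth statements" idea.
    TRUTH_STATEMENTS = {
        ("rabbit", "is_a", "mammal"),
        ("mammal", "has", "mammary glands"),
        ("rock", "is_not", "edible by humans"),
        ("rock", "is_not", "suitable as a pizza topping"),
    }

    def violates_truth_store(claims: set[tuple[str, str, str]]) -> list[tuple]:
        """Return every proposed claim that contradicts a stored negative fact."""
        violations = []
        for subject, relation, obj in claims:
            if relation == "is" and (subject, "is_not", obj) in TRUTH_STATEMENTS:
                violations.append((subject, relation, obj))
        return violations

    # A draft output claiming rocks make a fine topping fails the check:
    draft_claims = {("rock", "is", "suitable as a pizza topping")}
    assert violates_truth_store(draft_claims) == [("rock", "is", "suitable as a pizza topping")]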

Over time, such a "logic model" would grow and grow—just like a human mind—until it did a pretty good job at reasoning.

lelanthran · yesterday at 12:57 PM

> I would like to see the day when the context size is in gigabytes or tens of billions of tokens, not RAG or whatever, actual context.

Might not make a difference. I believe we are already at the point of negative returns: doubling the context from 800k tokens to 1600k loses a larger percentage of effective context than halving it from 800k to 400k does.