
ivape | today at 7:12 AM

> In order to get that lifetime companion, we'll need to make a leap in agentic memory.

Well, let’s take your life. Your life is about 3 billion seconds (a 100-year life). That’s just 3 billion next tokens. The thing you do at second N is, taken as a whole, just a next token. If next-token prediction can be scaled up so that a token is redefined from a piece of language to an entire discrete event or action, then it won’t be hard for the model to simply know what you will think and do … next. Memory, in that case, is just the next possible recall of a specific memory, or the next possible action, and so on. It doesn’t actually need all of the memory information; it just needs to know that you will seek a specific memory next.
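
For what it's worth, the 3-billion figure roughly checks out. A quick back-of-envelope in Python, nothing more than arithmetic:

    # seconds in a ~100-year life, back-of-envelope
    seconds_per_year = 365.25 * 24 * 60 * 60    # ~31.6 million
    life_seconds = 100 * seconds_per_year       # ~3.16 billion
    print(f"{life_seconds:.2e}")                # 3.16e+09 event-"tokens" at one per second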

Why would it need your entire database of memories if it already knows which exact memory you will be looking for next? The only thing that could explode the computational cost of this is if dynamic inputs fuck with your next-token prediction. For example, you must now absolutely think about a Pink Elephant. But even that is constrained by our material world: the world can’t physically push that much information through your senses.
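
To make that contrast concrete, here's a purely hypothetical toy (the names recall and predict_next_event are mine, not any real agent-memory API): the first function has to search a store that grows with your life; the second just emits the next event token directly.

    # Purely hypothetical sketch, not a real system.
    # Conventional agentic memory: store everything, search on demand.
    def recall(memory_db, query):
        # cost grows with the size of the store
        overlap = lambda m: len(set(m.split()) & set(query.split()))
        return max(memory_db, key=overlap)

    # The speculated alternative: predict the next "event token"
    # (which memory you will reach for) straight from recent history,
    # without touching a full store at all.
    def predict_next_event(next_event_model, history):
        return next_event_model(history)

    memories = ["bought the blue bike in 2014",
                "met Sam at the lake",
                "lost my keys at the beach"]
    print(recall(memories, "where did I lose my keys"))           # searches every memory
    toy_model = lambda history: "lost my keys at the beach"       # stand-in for a trained predictor
    print(predict_next_event(toy_model, ["walking to the car"]))  # no search at all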

A human life up to this exact moment is just a series of tokens, believe it or not. We know it for a fact because we’re bounded by time. The thing you just thought was an entire world snapshot that’s no longer here, just like an LLM output. We have not trained a model on human lives yet, just on knowledge.

We’re not done with the bitter lesson.