Hacker News

0xbadcafebee | today at 2:34 AM | 1 reply

Yep. It's the guy from the movie "Memento" doing your physics homework on a couple pages of legal paper. When he runs out of paper, he has to write a post-it note summarizing it all, then burn the papers, and his memory resets. You can only do so much with that.

If we can crack long memory, we're most of the way there. But you need RL in addition to long memory, or the model doesn't improve. Part of the genius of humans is their adaptability. Show them how to make coffee with one coffee machine and they adapt to pretty much every other coffee machine; that's not just memory, that's RL. (Or a simpler example: crows are more capable of learning and acting on memory than an LLM is.)

Currently the only way around both of these is brute force (take in RL input from users/experiments, re-train the models constantly), and that's both very slow and error-prone (the flaws in models' thinking come from a lack of high-quality RL inputs). So without two major breakthroughs we're stuck tweaking what we've got.


Replies

takwatanabe | today at 3:18 AM

The coffee machine example is interesting. That's procedural memory in neuroscience. You don't memorize each machine. You abstract the steps: grind the beans, insert a filter, add the grounds, pour water. Then you adapt them to any machine.

LLMs can't form procedural memory on their own. But you can build it outside the model. Store abstracted procedures, inject them when needed. That's closer to how the brain actually works than trying to retrain the model every time.
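The "store abstracted procedures, inject them when needed" idea can be sketched in a few lines. This is a minimal illustration, not any real library's API; all the names here (`ProceduralMemory`, `"make_coffee"`, the prompt wording) are made up for the example:

```python
# Sketch of an external "procedural memory" store for an LLM:
# device-independent procedures are kept outside the model and
# injected into the prompt when a matching task comes up.
# All names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class ProceduralMemory:
    procedures: dict = field(default_factory=dict)

    def store(self, name: str, steps: list) -> None:
        """Save an abstracted procedure as an ordered list of steps."""
        self.procedures[name] = steps

    def inject(self, name: str, task: str) -> str:
        """Build a prompt that carries the stored procedure alongside the task."""
        steps = self.procedures.get(name, [])
        numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
        return (
            f"Task: {task}\n"
            f"Known procedure ({name}):\n{numbered}\n"
            "Adapt these steps to the specific machine at hand."
        )


memory = ProceduralMemory()
memory.store("make_coffee",
             ["grind beans", "insert filter", "add grounds", "pour water"])

prompt = memory.inject("make_coffee",
                       "Brew coffee with an unfamiliar drip machine")
print(prompt)
```

The point of the sketch: the model never needs to be retrained on a new coffee machine; the abstracted procedure rides in with the context, and adaptation happens at inference time.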