Hacker News

kelnos today at 7:16 AM

That's not learning, though. That's just taking new information and stacking it on top of the trained model. And that new information consumes space in the context window. So sure, it can "learn" a limited number of things, but once you wipe context, that new information is gone. You can keep loading that "memory" back in, but before too long you'll have too little context left to do anything useful.
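The arithmetic behind that objection can be made concrete. A minimal sketch, with entirely made-up token counts (no real model or API is referenced): each time the saved "memory" is re-loaded, it subtracts from a fixed context budget, and the space left for actual work shrinks.

```python
# Illustrative only: how re-loading saved "memory" eats a fixed context
# window. All sizes are hypothetical assumptions, not any real model's.
CONTEXT_WINDOW = 8192  # total context size, in tokens (assumed)

def remaining_context(memory_tokens: int, turns: int) -> int:
    """Tokens left for useful work after re-loading `memory_tokens` of
    saved notes plus `turns` conversation turns of ~200 tokens each."""
    used = memory_tokens + turns * 200
    return max(CONTEXT_WINDOW - used, 0)

# As the saved "memory" grows across sessions, the usable window shrinks.
print(remaining_context(memory_tokens=1000, turns=5))  # most of the window free
print(remaining_context(memory_tokens=7000, turns=5))  # almost nothing left
```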

That kind of capability is not going to lead to AGI, not even close.


Replies

regularfry today at 12:07 PM

Two things:

1. It's still memory, of a sort, which is learning, of a sort.

2. It's a very short hop from "I have a stack of documents" to "I have some LoRA weights." You can already see that happening.
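To make the "LoRA weights" hop concrete: instead of storing new knowledge as documents in the context window, LoRA stores it as a trainable low-rank delta on frozen weights. The sketch below is a toy NumPy illustration of that idea; the names and sizes are assumptions, not any particular library's API.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 16, 2                      # model width and LoRA rank (r << d)
W = rng.standard_normal((d, d))   # frozen pretrained weight matrix

# Trainable low-rank factors. B starts at zero, so the adapter is a
# no-op until it is trained -- the base model's behavior is preserved.
A = rng.standard_normal((r, d))
B = np.zeros((d, r))

def forward(x: np.ndarray, scale: float = 1.0) -> np.ndarray:
    # Base output plus the low-rank correction B @ A applied to x.
    return x @ W.T + scale * (x @ A.T @ B.T)

x = rng.standard_normal(d)
# Untrained adapter changes nothing; after training, only A and B
# (2*d*r numbers, not d*d) carry the "memory".
assert np.allclose(forward(x), x @ W.T)
```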

charcircuit today at 8:42 AM

>but before too long you'll have too little context left to do anything useful.

One of the biggest boosts in LLM utility and knowledge was hooking them up to search engines. Giving them the ability to query a gigantic bank of information has already made them much more useful. The idea that a model couldn't similarly maintain its own store of information is, in my opinion, shortsighted.
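The "query a bank of information" pattern the comment describes is usually called retrieval-augmented generation: fetch relevant documents, then prepend them to the prompt. A toy sketch under stated assumptions (the document bank and keyword-overlap scoring are placeholders; real systems use search indexes or vector stores):

```python
# Hypothetical retrieval step for retrieval-augmented generation.
def retrieve(query: str, bank: list[str], k: int = 1) -> list[str]:
    """Rank documents by crude keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(bank,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

bank = [
    "The capital of France is Paris.",
    "LoRA adapts a model with low-rank weight updates.",
    "Context windows limit how much text a model can attend to.",
]

# The retrieved text is injected into the prompt before generation,
# so the bank can grow without retraining the model.
hit = retrieve("what is the capital of France", bank)[0]
prompt = f"Use this context: {hit}\n\nQuestion: what is the capital of France"
```

The same mechanism works whether the bank is a public search index or the model's own accumulated notes, which is the parallel the comment is drawing.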
