Hacker News

stavros · yesterday at 11:04 AM

Nope, it's not that, but it's nice of you to offer a straw man. Makes the argument flow better.


Replies

datsci_est_2015 · yesterday at 11:35 AM

Not entirely a straw man. What is the purpose of storing and retrieving LLMs at a fixed state, if not to guarantee a specific level of performance? To extend your analogy: wouldn't a strong model of intelligence be capable of running without having its hippocampus lobotomized?

Given the precariousness of managing LLM context windows, I don’t think it’s particularly unfair to assume that LLMs that learn without limit become very unstable.

To steelman: even if it is possible, it may be prohibitively expensive. But somehow I doubt it's possible at all.
