
tayo42 today at 2:19 AM

>Especially for LLMs, they are not (till now) learning on the fly.

Was this just awkward phrasing or did something change and they learn after training?


Replies

Dusseldorf today at 3:07 AM

There have been several projects lately attempting to create a running context/memory, and Claude Code also has some concept of continuous conversational memory, but all of these are bolted on at inference time; there's still no concept of conversations feeding back into the base model's training/weights on the fly.
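The distinction can be sketched in a few lines of Python. This is a hypothetical toy (all class and function names invented here): "memory" means retrieving past conversation snippets and prepending them to the prompt, while the model itself stays frozen; real systems use vector search rather than keyword overlap.

```python
# Toy sketch of inference-time memory: nothing here updates model weights.

class FrozenModel:
    """Stand-in for a pretrained LLM; its (imaginary) weights never change."""
    def generate(self, prompt: str) -> str:
        return f"response to: {prompt[:40]}..."

class MemoryStore:
    """Naive keyword-overlap memory; real systems use embedding search."""
    def __init__(self):
        self.entries = []

    def add(self, text: str):
        self.entries.append(text)

    def retrieve(self, query: str, k: int = 2):
        words = set(query.lower().split())
        # Rank stored snippets by how many words they share with the query.
        scored = sorted(self.entries,
                        key=lambda e: len(words & set(e.lower().split())),
                        reverse=True)
        return scored[:k]

def chat(model, memory, user_msg):
    # All "learning" lives in the prompt, bolted on at inference time.
    context = "\n".join(memory.retrieve(user_msg))
    prompt = f"Relevant past context:\n{context}\n\nUser: {user_msg}"
    reply = model.generate(prompt)
    memory.add(f"User said: {user_msg}")  # grows the store, not the model
    return reply

model = FrozenModel()
memory = MemoryStore()
memory.add("User said: my project is written in Rust")
print(chat(model, memory, "What language is my project in?"))
```

Training-on-the-fly would instead mean a gradient step on the new conversation, which none of these systems do.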