Hacker News

Kim_Bruning · today at 1:31 AM · 0 replies · view on HN

> and you'll blow the context over time and send the LLM to the sanatorium. It doesn't fit like the human brain can.

The LLM did have this capability at training time — absorbing new information into its weights — but the weights are frozen at inference time. This is a major weakness of current transformer architectures.
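The distinction can be sketched with a toy model (not a real transformer, just a single-weight illustration): during training, each step adjusts the weight; at inference, any number of forward passes leaves the weight untouched.

```python
class ToyModel:
    """Minimal stand-in for a model with learnable parameters."""

    def __init__(self):
        self.w = 0.0  # a single "weight"

    def predict(self, x):
        # Forward pass only: reads the weight, never writes it.
        return self.w * x

    def train_step(self, x, target, lr=0.1):
        # Gradient of squared error w.r.t. w, applied only during training.
        error = self.predict(x) - target
        self.w -= lr * error * x


model = ToyModel()

# Training time: weights change with every step (here, learning w ≈ 3).
for _ in range(50):
    model.train_step(2.0, 6.0)

# Inference time: weights are frozen no matter how much the model "sees".
w_before = model.w
for _ in range(100):
    model.predict(2.0)
assert model.w == w_before
```

In a real deployment the same asymmetry holds: serving a transformer runs forward passes with gradient updates disabled, so nothing the model processes at inference is ever written back into its parameters.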