
srean last Sunday at 4:31 PM

That's not correct. Even a toy like an exponentially weighted moving average produces unbounded context (of diminishing influence).
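(A minimal sketch of the point, with made-up values for alpha and the input stream: the EWMA keeps only a single scalar of state, yet that scalar is a weighted sum of every past input, with weights that shrink geometrically but never hit zero.)

```python
alpha = 0.3                       # assumed smoothing factor, for illustration
xs = [5.0, 1.0, 4.0, 2.0, 8.0]    # made-up input stream

# Recursive form: one scalar of state carried forward.
ewma = xs[0]
for x in xs[1:]:
    ewma = alpha * x + (1 - alpha) * ewma

# Equivalent explicit weighted sum over the *entire* history, showing the
# unbounded (but diminishing) influence of old inputs:
T = len(xs) - 1
weights = [(1 - alpha) ** T] + [alpha * (1 - alpha) ** (T - t) for t in range(1, T + 1)]
explicit = sum(w * x for w, x in zip(weights, xs))

print(ewma, explicit)  # both ~4.71; every x in xs contributes to the result
```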


Replies

empiko last Sunday at 5:05 PM

What do you mean? I can only input k tokens into my LLM to calculate the probabilities; that is the definition of my state. N-gram LMs use N tokens in exactly the same way, except that instead of an ML model they calculate the probabilities from observed frequencies. There is no unbounded context anywhere.
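(A rough sketch of the count-based estimate described above, using a toy corpus and a trigram size chosen purely for illustration: the model's "state" is just the last N-1 tokens, and the probabilities come from counting, not from a learned model.)

```python
from collections import Counter

N = 3  # trigram model, chosen for illustration
corpus = "the cat sat on the mat the cat sat on the hat".split()

# Count every N-gram and every (N-1)-token context that starts one.
ngrams = Counter(tuple(corpus[i:i + N]) for i in range(len(corpus) - N + 1))
contexts = Counter(tuple(corpus[i:i + N - 1]) for i in range(len(corpus) - N + 1))

def prob(next_tok, context):
    """P(next_tok | last N-1 tokens), estimated from observed frequencies."""
    return ngrams[context + (next_tok,)] / contexts[context]

print(prob("sat", ("the", "cat")))  # 1.0 in this toy corpus
```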
