
troupo · last Wednesday at 5:08 PM

I get by because I also have long-term memory, and experience, and I can learn. LLMs have none of that, and every new session is rebuilding the world anew.

And even my short-term memory is significantly larger than Claude's effective context, which in practice is at most about 50% of its 200k-token window. For the same task, it runs out of context while my short-term memory is probably not even 1% full (and I'm capable of more context-switching in the meantime).

And so even the "Opus 4.5 really is at a new tier" claim runs into the very same limitations all models have been running into since the beginning.


Replies

scotty79 · last Wednesday at 5:25 PM

> LLMs have none of that, and every new session is rebuilding the world anew.

For LLMs, long-term memory is achieved through tooling, which you discounted in your previous comments.
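The tooling pattern here is simple: facts are persisted outside the model and re-injected into the prompt in later sessions. A minimal sketch, assuming a hypothetical `MemoryStore` (real memory tools typically use embeddings and vector search instead of keyword matching):

```python
# Hypothetical sketch of "long-term memory via tooling" for an LLM:
# notes survive on disk, so a fresh session can recall them.
import json
import tempfile
from pathlib import Path


class MemoryStore:
    """Persists notes to a JSON file across sessions."""

    def __init__(self, path: str):
        self.path = Path(path)
        self.notes = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def remember(self, note: str) -> None:
        self.notes.append(note)
        self.path.write_text(json.dumps(self.notes))

    def recall(self, query: str) -> list[str]:
        # Naive keyword overlap; production tools use vector similarity.
        words = set(query.lower().split())
        return [n for n in self.notes if words & set(n.lower().split())]


# Session 1 stores a fact; a brand-new "session" reloads the same
# file and can recall it, even though the model itself forgot.
path = str(Path(tempfile.mkdtemp()) / "memory.json")
store = MemoryStore(path)
store.remember("user prefers tabs over spaces")

fresh_session = MemoryStore(path)
print(fresh_session.recall("tabs"))
```

The point of the sketch is only that the memory lives in the tool, not the model: each session starts cold but can query the store to rebuild context.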

You also overestimate the capacity of your short-term memory by a few orders of magnitude:

https://my.clevelandclinic.org/health/articles/short-term-me...
