
dvt · 04/23/2025

Yes, but this should be trivial to do with an internal `MEMORY` tool the LLM calls. I know the context can't grow infinitely, but that shouldn't prevent filling it with relevant info when discussing topic A (even a lazy RAG approach should work).
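
For the curious, here's a minimal sketch of what such a tool could look like. The retrieval step is a toy bag-of-words cosine similarity standing in for a real embedding index, and all the names (`MemoryStore`, `save`, `recall`) are hypothetical, not any particular framework's API:

```python
# Sketch of an internal MEMORY tool: the LLM calls save() to persist
# facts and recall() to pull relevant notes back into its context.
import math
from collections import Counter

class MemoryStore:
    def __init__(self):
        self.notes: list[str] = []

    def save(self, note: str) -> None:
        """Tool call the LLM makes to persist a fact about topic A."""
        self.notes.append(note)

    def recall(self, query: str, k: int = 3) -> list[str]:
        """Lazy RAG: return the k notes most similar to the query."""
        q = Counter(query.lower().split())
        scored = [(self._cosine(q, Counter(n.lower().split())), n)
                  for n in self.notes]
        scored.sort(reverse=True)
        return [note for score, note in scored[:k] if score > 0]

    @staticmethod
    def _cosine(a: Counter, b: Counter) -> float:
        # Toy relevance score; a real system would use embeddings.
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

memory = MemoryStore()
memory.save("User prefers Rust for systems work.")
memory.save("Project Atlas ships on May 1.")
# Before answering a question about topic A, inject relevant notes:
print(memory.recall("what language does the user prefer?"))
```

The point is just that the context window only ever holds the top-k retrieved notes, not the whole memory, which is why the store can grow while the context stays bounded.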


Replies

otabdeveloper4 · 04/24/2025

What you're describing is just RAG, and it doesn't work that well. (You need a search engine for RAG, and the ideal search engine is an LLM with infinite context. But the only way to scale LLM context is by using RAG. We have infinite recursion here.)

nthingtohide · 04/23/2025

You're asking for a feature like the one shown here; future advances should help with this.

https://youtu.be/ZUZT4x-detM