
zacksiri 05/15/2025 · 5 replies

I've been working on solving this with quite a bit of success, and I'll be sharing more on it soon. It involves two systems: the first is the LLM itself, and the second acts as a 'curator' of thoughts, you could say.

It dynamically swaps portions of the context in and out. The system is also not based on explicit definitions; it relies on the LLM 'filling the gaps'. It helps the LLM break problems down into small tasks, which eventually aggregate into the full task.
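As a rough illustration of the idea (my own sketch, not the author's actual system), a "curator" can be modeled as a store of notes plus a relevance function that decides which notes get swapped into a bounded context for the current subtask. Here, plain word overlap stands in for what would really be an LLM's relevance judgment; all names are hypothetical.

```python
def overlap(a: str, b: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    return len(set(a.lower().split()) & set(b.lower().split()))

class Curator:
    def __init__(self, budget: int = 2):
        self.notes: list[str] = []   # everything remembered so far
        self.budget = budget         # max notes allowed in the active context

    def remember(self, note: str) -> None:
        self.notes.append(note)

    def context_for(self, task: str) -> list[str]:
        # Swap in only the notes most relevant to the current subtask,
        # keeping the active context within the budget.
        ranked = sorted(self.notes, key=lambda n: overlap(n, task), reverse=True)
        return ranked[: self.budget]

c = Curator(budget=2)
c.remember("user prefers Elixir for backend services")
c.remember("deploy target is Kubernetes")
c.remember("favorite color is blue")
print(c.context_for("write a backend service in Elixir"))
```

The point is only the shape: context is not fixed, it is recomputed per subtask from a larger memory.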


Replies

simianwords 05/15/2025

This is a great idea. What you are doing is a RAG over the chat.

In the future, such distinctions in the memory hierarchy will become clearer:

- Primary memory in the training data

- Secondary memory in context

- Tertiary memory in RAG
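"RAG over the chat" can be sketched concretely (a hedged toy, with bag-of-words cosine standing in for a real embedding model): recent turns stay verbatim in the context (secondary memory), while older turns are evicted but remain retrievable on demand (tertiary memory).

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Bag-of-words cosine similarity; a stand-in for real embeddings."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ChatRAG:
    def __init__(self, window: int = 2):
        self.turns: list[str] = []
        self.window = window  # "secondary memory": last N turns kept verbatim

    def add(self, turn: str) -> None:
        self.turns.append(turn)

    def prompt_for(self, query: str, k: int = 1) -> list[str]:
        recent = self.turns[-self.window:]          # still in context
        older = self.turns[: -self.window]          # evicted, but searchable
        q = Counter(query.lower().split())
        retrieved = sorted(
            older,
            key=lambda t: cosine(Counter(t.lower().split()), q),
            reverse=True,
        )[:k]
        return retrieved + recent  # tertiary recall + secondary context

rag = ChatRAG(window=2)
for turn in ["my dog is named Rex", "I work at a bakery",
             "what should I cook tonight", "something with chocolate"]:
    rag.add(turn)
print(rag.prompt_for("what is my dog's name"))
```

Primary memory (the training data) has no analogue here; it is whatever the model already knows without any of this.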

cadamsdotcom 05/15/2025

Sounds like an exciting idea.

May I suggest: put what you have out there in the world, even if it's barely more than a couple of prompts. If it's a good idea and people see it and improve on it, it'll get picked up and worked on by others - it might even take on a life of its own!

adrianm 05/15/2025

This is a class of mental critic from Minsky's The Emotion Machine.

adiadd 05/15/2025

Would be great to get more info on what you're building - seems interesting!

layer8 05/15/2025

So, Map-Reduce-of-Thought?
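Taking the quip literally, the break-down-then-aggregate pattern described above does look like map-reduce: split a task into subtasks, answer each independently ("map"), then combine the partial answers ("reduce"). A minimal sketch, where `solve` stands in for an LLM call and the splitter is hypothetical:

```python
def solve(subtask: str) -> str:
    # Stub for an LLM call; returns a canned partial answer.
    return f"answer({subtask})"

def map_reduce_of_thought(task: str, split, combine) -> str:
    partials = [solve(s) for s in split(task)]  # map: independent thoughts
    return combine(partials)                    # reduce: aggregate them

result = map_reduce_of_thought(
    "summarize chapters 1-3",
    split=lambda t: ["ch1", "ch2", "ch3"],      # hypothetical splitter
    combine=lambda ps: "; ".join(ps),
)
print(result)  # answer(ch1); answer(ch2); answer(ch3)
```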
