Really interested to understand how the AI keeps rebaselining back to the topic at hand without getting more confused as its context window fills up.
Did it just essentially create one big plan and spawn different agents to execute its steps, so it acted as an orchestrator?
Even the orchestrator would have to detect when it starts to stray off task and restart itself.
Probably part of the "secret sauce" in the harnesses and prompts this lab developed to create its eventual marketable product.
But also, like, normal hierarchical memory management.
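Pure speculation on my part, but the hierarchical-memory / orchestrator pattern people describe usually looks something like this minimal sketch: the top-level plan stays fixed as the "rebaseline" anchor, each subtask runs with a fresh context, and only a short summary flows back up, so the orchestrator's own context never fills with raw transcripts. All names here (`run_subtask`, `Orchestrator`, etc.) are hypothetical, not from any actual lab's harness.

```python
def run_subtask(task: str) -> str:
    """Stand-in for spawning a worker agent with a fresh context.

    In a real harness this would be an LLM call plus tool use; here it
    just returns a fake transcript so the structure is runnable.
    """
    return f"transcript for {task!r}: ...lots of tool calls..."


def summarize(transcript: str, max_len: int = 40) -> str:
    """Compress a worker transcript before it re-enters the orchestrator.

    This truncation stands in for an LLM summarization step.
    """
    return transcript[:max_len] + ("..." if len(transcript) > max_len else "")


class Orchestrator:
    def __init__(self, plan: list[str]):
        self.plan = plan             # the fixed "big plan" - the anchor to rebaseline against
        self.memory: list[str] = []  # summaries only, never raw transcripts

    def run(self) -> list[str]:
        for task in self.plan:
            transcript = run_subtask(task)       # worker gets its own clean context
            self.memory.append(summarize(transcript))
        return self.memory


plan = ["write tests", "implement parser", "refactor CLI"]
memory = Orchestrator(plan).run()
print(memory)
```

The "restart itself" part would then just be checking `self.plan` against `self.memory` at each step and re-planning when they diverge, which keeps drift detection cheap because the orchestrator's context stays small.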