It's basically just a way for the LLM to lazy-load curated information, tools, and scripts into context. The benefit of making it a "standard" is that future generations of LLMs will be trained on this pattern specifically, and will get quite good at it.
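The lazy-load pattern is roughly: keep only a one-line description per skill in context, and read the full instructions file only when it looks relevant. A minimal sketch, assuming skills are plain files whose first line is a short description (the file names and layout here are hypothetical):

```python
import os
import tempfile

def build_catalog(skills_dir):
    """Read only each skill's first line (its one-line description),
    so the full instructions stay out of context until needed."""
    catalog = {}
    for name in sorted(os.listdir(skills_dir)):
        with open(os.path.join(skills_dir, name)) as f:
            catalog[name] = f.readline().strip()
    return catalog

def load_skill(skills_dir, name):
    """Lazy-load: pull the full skill body into context on demand."""
    with open(os.path.join(skills_dir, name)) as f:
        return f.read()

# Demo with a hypothetical skill file.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "pdf-tools.md"), "w") as f:
        f.write("Extract text and tables from PDFs\n\nStep 1: ...")
    catalog = build_catalog(d)            # cheap: one line per skill
    print(catalog["pdf-tools.md"])        # prints the short description only
    full = load_skill(d, "pdf-tools.md")  # full body, only when relevant
```

The point of the "standard" part is just that the catalog format and the load step are predictable, so a model trained on the pattern knows to ask for the full file instead of having everything stuffed into the prompt up front.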
> It's basically just a way for the LLM to lazy-load curated information, tools, and scripts into context.
So basically a reusable prompt, like the previous commenter asked?
Does it persist the loaded information for the remainder of the conversation, or does it intelligently cull the context when it's no longer needed?