How long until LLMs are prompting LLMs to write a response to their user query?
Actually, this already happens in a modular way AFAIK…
This already happens with -- e.g. -- Claude Code spawning parallel agents and then collating their results.
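That fan-out/collate pattern is easy to sketch: an orchestrator splits the user query into sub-prompts, runs them in parallel, then sends the partial answers back through the model for a final synthesis. A minimal sketch, where `call_llm` is a hypothetical stand-in for any real model API (not Claude Code's actual internals):

```python
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call;
    # returns a canned string purely for illustration.
    return f"answer to: {prompt}"

def orchestrate(user_query: str) -> str:
    # Orchestrator decomposes the query into sub-tasks
    # (the decomposition scheme here is made up).
    subtasks = [f"{user_query} (aspect {i})" for i in range(3)]
    # Spawn parallel "agents": each sub-prompt gets its own model call.
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(call_llm, subtasks))
    # Collate: a final prompt asks the model to merge the partial answers.
    collation_prompt = "Combine these partial answers:\n" + "\n".join(partials)
    return call_llm(collation_prompt)

print(orchestrate("explain attention"))
```

So "LLMs prompting LLMs" is just this loop with the stub replaced by real API calls.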