These lessons get obliterated with every new LLM generation. Look at LangChain: it started on weak models with small context windows, built some crazy architecture to work around their limitations, and that architecture was made obsolete when GPT-3.5 was released, yet people still use it and overcomplicate things. Better to look at where the puck is going: context sizes keep increasing, agents can use more tools, and we might get some kind of in-call context cleanup at some point too. Soon a single agent may be able to do everything, spinning forever instead of calling subagents just to dodge context size limitations.
It’ll all be a ClaudeVM. No code. https://jperla.com/blog/claude-electron-not-claudevm
I'm trying to include patterns that work independently of model releases.
It's tricky though. Take "red/green TDD" for example - it's perfectly possible that models will start defaulting to doing that anyway pretty soon.
In that case it's only three words, so it doesn't feel hugely wasteful if it turns out not to be necessary - and there's still value in understanding what it means even if you no longer have to explicitly tell the agents to do it.