I have no idea what this is or does. I really wish they'd given a better description of why it's useful.
This is not how I've seen the term meta-harness used. The common usage I've seen is that a meta-harness is a wrapper around an existing agent that gives that agent a new UI or new abilities.
I did this too, ablating all the components in my coding agent harness. The insight from my meta-optimization loops was "have judge agents review the plan and implementation".
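For anyone curious what a judge step like that looks like in practice, here's a minimal sketch. Everything here is hypothetical: `call_llm` is a stand-in for whatever model client your harness already uses, and the prompt wording is just illustrative.

```python
def build_judge_prompt(plan: str, implementation: str) -> str:
    """Build the review prompt shown to a judge agent (wording is illustrative)."""
    return (
        "You are a reviewing agent. Critique the plan and its implementation.\n"
        f"PLAN:\n{plan}\n\n"
        f"IMPLEMENTATION:\n{implementation}\n\n"
        "List concrete problems, or reply APPROVED."
    )

def call_llm(prompt: str) -> str:
    # Stand-in: wire this up to your actual model client.
    raise NotImplementedError

def judge(plan: str, implementation: str) -> str:
    """Ask a judge agent to review a plan/implementation pair."""
    return call_llm(build_judge_prompt(plan, implementation))
```

The point is only that the judge is a separate call with both artifacts in context, so it can catch drift between what was planned and what was built.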
One of my own insights here is that you need to collect not just execution traces, but all the human-in-the-loop nudges and steering commands. They are one of the purest sources of feedback on coding agents when seen in context.
I agree with OP on the need to collect traces and compare them, not just scores. It is a much richer source of feedback.
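Concretely, the cheap way to capture this is to log human nudges as first-class trace events alongside tool calls and model outputs, so they stay in context when you replay a trace. A minimal sketch (the event schema and file layout are my own assumptions, not anyone's actual format):

```python
import json
import time

def log_event(path: str, kind: str, payload: dict) -> None:
    """Append one trace event as a JSON line.

    kind is e.g. 'tool_call', 'model_output', or 'human_nudge' --
    steering commands get logged like any other event, in sequence,
    so they can be read back in the context where they happened.
    """
    event = {"t": time.time(), "kind": kind, "payload": payload}
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")
```

Replaying the file then shows exactly which model action provoked each human correction.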
If anyone is interested I have a slide deck about my approach: https://horiacristescu.github.io/claude-playbook-plugin/docs...
This seems to be another over-optimization for AI that many are trying to get into. The LLMs improve, your setup is deprecated, and you've wasted time optimizing for a slight edge. TL;DR: you trade time for a slight edge.
Serious question: I've already got an opencode harness running on a local model. It's easily installable via the insecure bash command. It's already tailored with a couple of plugins, and with a proper TODO.md and planning I can get it to loop fine, as long as I pay attention to its pitfalls around vague or indeterminate language. It's all running on an AMD 395+ with a Qwen3-Coder-Next model at ~256k context. opencode has a web UI I can put behind a password-protected endpoint and keep busy from anywhere I need via a simple nginx proxy.
How does this go above and beyond this straightforward open-source, open-weights, and relatively cheap setup? Do you just get more tokens from SOTA models? Can anyone rationally claim the output of all that token production is high-quality and secure?
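For reference, the nginx side of a setup like that is just a basic-auth reverse proxy. A sketch, where the hostname, port, and file paths are all assumptions you'd swap for your own (opencode's actual listen port depends on how you start it):

```nginx
server {
    listen 443 ssl;
    server_name agent.example.com;          # assumed hostname

    auth_basic "restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;  # created with: htpasswd -c

    location / {
        proxy_pass http://127.0.0.1:4096;   # assumed local port for the web UI
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # WebSocket support
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

The `Upgrade`/`Connection` headers matter if the UI streams over WebSockets; plain basic auth is the weakest acceptable gate, so at minimum keep it behind TLS.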
It has now become fashionable to dress oneself in the garb of science to sell dev environments ... for agents.
It has now become fashionable to claim much, and furnish little.
It has now become fashionable to fail to understand, or to state, the core of one's proposal in as few words as possible: instead of "a genetic algorithm applied to the space of harnesses, parallelized by our infrastructure," we get "Three swaps. Same orchestrator. Same dashboard. The wiring is the thing."
We're cooked chat.