Hacker News

Towards a science of scaling agent systems: When and why agent systems work

47 points by gmays yesterday at 6:00 PM | 20 comments

Comments

0xbadcafebee today at 2:29 AM

> Conversely, on tasks requiring strict sequential reasoning (like planning in PlanCraft), every multi-agent variant we tested degraded performance by 39-70%. In these scenarios, the overhead of communication fragmented the reasoning process, leaving insufficient "cognitive budget" for the actual task.

> As tasks require more tools (e.g., a coding agent with access to 16+ tools), the "tax" of coordinating multiple agents increases disproportionately.

This aligns well with the principle of highly cohesive, loosely coupled design for software components. If you instruct the AI to design this way, it should produce components that are simpler to reason about and require fewer sequential steps to work on. You can think of cohesion in many different ways, but one is shared functionality and another is tool/library dependency.
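
As a concrete (hypothetical) sketch of the tool-dependency angle: group tools by shared domain and give each agent only one cohesive group, so no single agent has to coordinate across all 16+ tools. Names here are invented for illustration:

```python
from collections import defaultdict

# Hypothetical example: each tool tagged with the domain it depends on.
TOOLS = {
    "read_file": "filesystem", "write_file": "filesystem",
    "compile": "build", "run_tests": "build",
    "http_get": "network", "http_post": "network",
}

def partition_by_cohesion(tools):
    """Group tools by shared domain -- one simple cohesion heuristic."""
    groups = defaultdict(list)
    for name, domain in tools.items():
        groups[domain].append(name)
    return dict(groups)

# Each group becomes one agent's toolset: high cohesion within a group,
# loose coupling between groups, fewer tools per agent to juggle.
for domain, names in partition_by_cohesion(TOOLS).items():
    print(f"{domain}-agent: {names}")
```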

with today at 2:25 AM

It's true that most problems can be solved with context + prompt. I have seen teams within large organizations complicate this into elaborate "agentic orchestration" just to impress leadership who lack the expertise to realize it isn't even necessary. Hell, there are various startups that make this their moat.

Good for promo projects though, lol

zkmon yesterday at 9:46 PM

> We found that independent multi-agent systems (agents working in parallel without talking) amplified errors by 17.2x

The paper sounds too shallow. The error data doesn't seem to have a rationale or clear correlation with the architecture. Specifically, what makes the SAS architecture have the lowest error rates while a similar architecture with independent agents has the highest? The conclusion doesn't seem well grounded in reasoning.

Falimonda today at 1:05 AM

I've been building something in this space ("Clink", a multi-agent coordination layer), and this research confirms some of the assumptions that motivated the project. You can't just throw more agents at a problem and expect it to get better.

The error amplification numbers are wild! 17x for independent agents vs. 4x with some central coordination. Clink gives users (and, more importantly, their agents) the primitives to choose their own pattern.

The most relevant features are...

- work queues with claim/release for parallelizable tasks
- checkpoint dependencies when things need to be sequential
- consensus voting as a gate before anything critical happens
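
For the first of those, the claim/release shape is roughly this (a toy in-memory sketch to illustrate the pattern, with invented names, not Clink's actual interface):

```python
import threading

class WorkQueue:
    """Toy claim/release work queue -- pattern illustration only."""

    def __init__(self, tasks):
        self._lock = threading.Lock()
        self._pending = list(tasks)
        self._claimed = {}  # task -> worker_id

    def claim(self, worker_id):
        """Atomically hand one pending task to a worker; None if empty."""
        with self._lock:
            if not self._pending:
                return None
            task = self._pending.pop(0)
            self._claimed[task] = worker_id
            return task

    def release(self, task, done=True):
        """Worker returns the task: drop it if done, requeue it if it failed."""
        with self._lock:
            self._claimed.pop(task, None)
            if not done:
                self._pending.append(task)
```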

The part about tool count increasing coordination overhead is interesting too. I've been considering exposing just a single tool to address this, but I wonder how this plays out as people start stacking more MCP servers together. It feels like we're all still learning what works here. The docs are at https://docs.clink.voxos.ai if anyone wants to poke around!

localghost3000 yesterday at 8:34 PM

I've been building a lot of agent workflows at my day job. Something I've found a lot of success with when deciding on an orchestration strategy is to ask the agent what it recommends as part of the planning phase. Using the agent to help you improve its own performance has been a game changer for me in leveraging this tech effectively. YMMV of course. I mostly use Claude Code, so who knows with the others.
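
Concretely, it's just one extra call at the start of planning. A rough sketch, where `call_llm` is a placeholder for whatever client you use (Claude Code, a raw API, etc.) and the names are hypothetical:

```python
PLANNING_PROMPT = """You will be asked to execute this task: {task}

Candidate orchestration patterns: single agent, parallel sub-agents,
sequential pipeline, orchestrator + workers. Which do you recommend
for this task, and why? Answer briefly."""

def recommend_orchestration(task, call_llm):
    """One extra planning-phase call; the answer informs the workflow design."""
    return call_llm(PLANNING_PROMPT.format(task=task))
```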

kioku yesterday at 11:52 PM

I found the captions on Figure 1 quite interesting.

> Average performance (%) across four agentic benchmarks improves consistently with increasing model Intelligence Index.

> Centralized and hybrid coordination generally yield superior scaling efficiency, suggesting that collaborative agentic structures amplify capability gains more effectively than individual scaling alone.

Then again, the deltas between SAS and the best-performing MAS approach are ~8%, so I can't help wondering whether it's worth the extra cost, at least for the generation of models studied.

CuriouslyC yesterday at 7:53 PM

This is a neat idea, but there are so many variables here that it's hard to generalize.

Empirically, a top-level orchestrator that calls out to a planning committee, then generates a task DAG from the plan and executes it in parallel where possible, is the setup I've seen produce the best results across various heterogeneous environments. As models evolve, crosstalk may become less of a liability.
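
The execution half of that is basically a level-parallel walk of the task DAG. A minimal sketch (invented names, no particular framework):

```python
from concurrent.futures import ThreadPoolExecutor

def topo_levels(dag):
    """Group tasks into levels whose dependencies all sit in earlier levels."""
    done, levels = set(), []
    while len(done) < len(dag):
        ready = [t for t, deps in dag.items() if t not in done and deps <= done]
        if not ready:
            raise ValueError("cycle in task DAG")
        levels.append(ready)
        done.update(ready)
    return levels

def run_dag(dag, run_task):
    # Tasks within a level are independent, so fan them out; levels run
    # sequentially so dependencies are respected.
    with ThreadPoolExecutor() as pool:
        for level in topo_levels(dag):
            list(pool.map(run_task, level))

# Example: plan -> {implement_a, implement_b} in parallel -> integrate
dag = {
    "plan": set(),
    "implement_a": {"plan"},
    "implement_b": {"plan"},
    "integrate": {"implement_a", "implement_b"},
}
run_dag(dag, lambda task: print("running", task))
```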

verdverm yesterday at 7:32 PM

Gonna read this with a grain of salt because I have been rather unimpressed with Google's AI products, save direct API calls to Gemini.

The rest is trash they're forcing down our throats.
