I tried my hand at coding with multiple agents at the same time recently. I had to add related logic to 4 different repos. Basically an action would traverse all of them, one by one, carrying some data. I decided to implement the change in all of them at the same time with 4 Claude Code instances and it worked the first time.
It's crazy how good coding agents have become. Sometimes I barely even need to read the code because it's so reliable and I've developed a kind of sense for when I can trust it.
It boggles my mind how accurate it is when you give it the full necessary context. It's more accurate than any living being could possibly be. It's like it's pulling the optimal code directly from the fabric of the universe.
It's kind of scary to think that there might be AI as capable as this applied to things besides next token prediction... Such AI could probably exert an extreme degree of control over society and over individual minds.
I understand why people think we live in a simulation. It feels like the capability is there.
Conway’s law still applies.
Good architecture, actor models, and collaboration patterns do not emerge magically from “more agents”.
Maybe what’s missing is the architect’s role.
To be honest, humans often have no overview of an application either. We navigate up and down the namespace, building the "overview" as we go. I don't see anything that prevents an agent from moving up and down that namespace, writing its assumptions into the codebase and requesting feedback from other agents working on different sets of files.
The thing that TFA doesn't seem to go into is that these mathematical results apply to human agents in exactly the same way as they do to AI agents, and nevertheless we have massive codebases like Linux. If people can figure out how to do it, then there's no math that can help you prove that AIs can't.
I run a small team of AI agents building a product together. One agent acts as supervisor — reviews PRs, resolves conflicts, keeps everyone aligned. It works at this scale (3-4 agents) because the supervisor can hold the full context. But I can already see the bottleneck — the supervisor becomes the single point of coordination, same as a human tech lead. The distributed systems framing makes sense. What I'm not sure about is whether the answer is a new formal language, or just better tooling around the patterns human teams already use (code review, specs, tests).
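That supervisor role can be sketched in a few lines. This is not any real framework's API, just a toy model (the `PullRequest` and `Supervisor` types are hypothetical) of a coordinator that holds the full context and flags file-level conflicts between agents:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PullRequest:
    author: str
    files: frozenset[str]
    description: str

@dataclass
class Supervisor:
    """Single coordinator that holds the full context for a small agent team."""
    merged: list[PullRequest] = field(default_factory=list)
    touched_files: set[str] = field(default_factory=set)

    def review(self, pr: PullRequest) -> str:
        # Conflict check: has an already-merged PR touched the same files?
        conflicts = pr.files & self.touched_files
        if conflicts:
            return f"request-changes: rebase onto {sorted(conflicts)}"
        self.merged.append(pr)
        self.touched_files |= pr.files
        return "approve"

supervisor = Supervisor()
print(supervisor.review(PullRequest("agent-1", frozenset({"api.py"}), "add endpoint")))
print(supervisor.review(PullRequest("agent-2", frozenset({"api.py"}), "refactor api")))
```

The bottleneck is visible right in the sketch: every PR flows through one `Supervisor` instance, so its context (here, `touched_files`) grows with the whole team's output.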
Doesn't this whole argument fall apart if we consider iteration over time? Sure, the initial implementation might be uncoordinated, but once the subagents have implemented it, what stops the main agent from reviewing the code and sorting out any inconsistencies, ultimately arriving at a solution faster than it could if it wrote it by itself?
It’s not a solution, but it’s why humans have developed the obvious approach of “build one thing, then everyone can see that one thing and agree what needs to happen next” (i.e. the space of possible solutions is reduced by creating one thing, and then the next set of choices is reduced by that original choice).
This might be obvious to everyone, but it’s a nice way for me to view it (sort of restating the non-waterfall (agile?) approach to specification discovery).
I.e. waterfall design without coding is too underspecified, hence the agile approach of using code iteratively to converge on an exact specification.
After you point this out, it is obviously right!
Makes sense. Coordination between multiple agents feels like the real challenge rather than just building them.
Well, it starts with the “agree” list. I don't agree that next-gen models will be smarter. I would argue there's been no real improvement in the models themselves in the last couple of years, just improvements in stability and the (agentic) tooling around them.
No shit, Sherlock.
The fundamental assumptions of distributed systems are multiple machines that fail independently, communicate over unreliable networks, and have no shared clock. The consequence is needing to solve consensus, Byzantine faults, ordering, consistency vs. availability, and exactly-once delivery.
However, AI agents don't share these problems in the classical sense. Building agents is about context attention, relevance, and information density inside a single ordered buffer. The distributed part is building an orchestrator that manages these things. At noetive.io we're currently working on the context-relevance part with our contextual broker, Semantik.
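The “relevance and density inside a single ordered buffer” part can be illustrated with a toy sketch (this is not Semantik's API, just a generic greedy packer; the relevance scores are assumed to come from somewhere upstream):

```python
def pack_context(chunks: list[tuple[str, float]], budget: int) -> str:
    """Greedily keep the highest-relevance chunks that fit a token budget,
    then emit them in their original order so the buffer stays coherent."""
    # Rank candidate chunks by relevance score, descending.
    ranked = sorted(enumerate(chunks), key=lambda ic: ic[1][1], reverse=True)
    chosen, used = [], 0
    for idx, (text, _score) in ranked:
        cost = len(text.split())  # crude token estimate: whitespace words
        if used + cost <= budget:
            chosen.append(idx)
            used += cost
    # Restore original ordering for the final buffer.
    return "\n".join(chunks[i][0] for i in sorted(chosen))

ctx = pack_context(
    [("repo layout overview", 0.9),
     ("unrelated changelog entry", 0.1),
     ("function under edit", 0.95)],
    budget=6,
)
print(ctx)  # keeps the two high-relevance chunks, drops the changelog entry
```

The interesting engineering lives in the scoring function and the token accounting, not in the packing loop itself; that's what makes the problem feel more like information retrieval than distributed consensus.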