Hacker News

Agent-to-agent pair programming

72 points | by axldelafosse | today at 1:47 AM | 22 comments

Comments

yesensm, today at 8:04 AM

I’m curious whether anyone has measured this systematically. Right now most of the evidence for multi-agent setups still feels anecdotal.

rsafaya, today at 9:15 AM

I think the A2A space is wide open. Great to see this approach using App Server and Channels. I tried building something similar (at a high level) for a more B2C use case for OpenClaw users: https://github.com/agentlink-dev/agentlink. Currently I think the major agents haven't fully owned the "wake the agent" use case. Regardless, this is a very cool approach. All the best.

cadamsdotcom, today at 3:56 AM

The vibes are great, but there's a need for more science on this multi-agent thing.

alienreborn, today at 3:14 AM

I have been trying a similar setup since last week using https://rjcorwin.github.io/cook/

edf13, today at 6:49 AM

Nice - I do something similar in a semi-manual way.

I do find Codex very good at reviewing work marked as completed by Claude, especially when I get Claude to write up its work with a why, where & how doc.

It's very rare that Claude has fully completed the task successfully and Codex doesn't find issues.

vessenes, today at 3:11 AM

I prefer Claude for generation/creativity, and Codex for bull-headed, accurate complaining and auditing. Very rarely, Claude just doesn't "get it" and it makes sense to have Codex edit directly. But generally I think Codex is happiest, and best used, complaining.

shreyssh, today at 7:41 AM

This is interesting for code, but I'm curious about agent-to-agent coordination for ops tasks, like one agent detecting a database anomaly and another auto-remediating it.

bradfox2, today at 4:05 AM

Multi-turn review, with code written by Claude Code and reviewed by Codex, works pretty well. It's been one of the only ways to deliver larger-scoped features without constant bugs. I've seen them do 10-15 rounds of fix and review until complete.

Also implemented this as a GitHub Action; it works well for a Sentry-to-GitHub auto-triage-and-fix-PR flow.
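The fix/review loop described above can be sketched in a few lines. This is a minimal, hedged illustration, not anyone's actual implementation: `run_fixer` and `run_reviewer` are hypothetical stand-ins for real agent invocations (e.g. shelling out to a coding CLI), and the round cap mirrors the 10-15 rounds mentioned.

```python
from typing import Callable

def review_loop(
    run_fixer: Callable[[str], str],      # takes feedback, returns a patch summary
    run_reviewer: Callable[[str], list],  # takes the patch, returns a list of issues
    max_rounds: int = 15,                 # cap on fix/review rounds
) -> tuple[int, list]:
    """Alternate fixer and reviewer until the reviewer finds no issues,
    or until max_rounds is exhausted. Returns (rounds_used, open_issues)."""
    feedback = "initial implementation"
    issues: list = []
    for round_no in range(1, max_rounds + 1):
        patch = run_fixer(feedback)
        issues = run_reviewer(patch)
        if not issues:                    # reviewer is satisfied: converged
            return round_no, []
        feedback = "; ".join(issues)      # feed the issues back to the fixer
    return max_rounds, issues             # gave up with issues outstanding
```

The same skeleton drops into a CI job: the loop body becomes two non-interactive agent calls, and the cap keeps a disagreeing pair of agents from burning tokens forever.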

jedisct1, today at 3:09 AM

I systematically use reviewer agents in Swival: https://swival.dev/pages/reviews.html

Even with the same model (--self-review), it makes a huge difference, and immediately highlights how bad the first iterations of an LLM's output can be.
