Hacker News

Show HN: I built a tool to help AI agents know when a PR is good to go

14 points by dsifry today at 9:55 AM | 9 comments

I've been using Claude Code heavily, and kept hitting the same issue: the agent would push changes, respond to reviews, wait for CI... but never really know when it was done.

It would poll CI in loops. Miss actionable comments buried among 15 CodeRabbit suggestions. Or declare victory while threads were still unresolved.

The core problem: no deterministic way for an agent to know a PR is ready to merge.

So I built gtg (Good To Go). One command, one answer:

$ gtg 123
OK PR #123: READY
   CI: success (5/5 passed)
   Threads: 3/3 resolved

It aggregates CI status, classifies review comments (actionable vs. noise), and tracks thread resolution. Returns JSON for agents or human-readable text.
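As a rough illustration of how an agent might consume that JSON, here is a minimal sketch. The field names (`ready`, `ci`, `threads`, etc.) are assumptions for illustration only; gtg's actual output schema may differ.

```python
import json

# Hypothetical gtg JSON output -- the real schema may differ.
payload = json.loads("""
{
  "pr": 123,
  "ready": true,
  "ci": {"status": "success", "passed": 5, "total": 5},
  "threads": {"resolved": 3, "total": 3}
}
""")

def good_to_go(result: dict) -> bool:
    """An agent's merge gate: every check green, every thread resolved."""
    ci = result["ci"]
    threads = result["threads"]
    return (
        result["ready"]
        and ci["status"] == "success"
        and ci["passed"] == ci["total"]
        and threads["resolved"] == threads["total"]
    )

print(good_to_go(payload))  # deterministic yes/no -- no polling heuristics
```

The point is that the agent gets a single boolean decision instead of re-deriving "am I done?" from raw CI logs and comment threads on every loop iteration.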

The comment classification is the interesting part — it understands CodeRabbit severity markers, Greptile patterns, Claude's blocking/approval language. "Critical: SQL injection" gets flagged; "Nice refactor!" doesn't.
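A toy version of that severity-based triage can be sketched with a few regexes. These patterns are made up for illustration; gtg's real classifier and the exact marker formats emitted by CodeRabbit, Greptile, and Claude may differ.

```python
import re

# Illustrative patterns only -- not gtg's actual rule set.
ACTIONABLE = [
    re.compile(r"\bcritical\b", re.I),
    re.compile(r"sql injection|security|vulnerab", re.I),
    re.compile(r"\bblocking\b", re.I),
]
NOISE = [
    re.compile(r"\bnit\b", re.I),
    re.compile(r"\b(nice|great|lgtm)\b", re.I),
]

def classify(comment: str) -> str:
    """Bucket a review comment as actionable, noise, or unclassified."""
    if any(p.search(comment) for p in ACTIONABLE):
        return "actionable"
    if any(p.search(comment) for p in NOISE):
        return "noise"
    return "unclassified"

print(classify("Critical: SQL injection in query builder"))  # actionable
print(classify("Nice refactor!"))                            # noise
```

Anything unclassified can be surfaced to the agent (or a human) rather than silently dropped, which keeps the gate conservative.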

MIT licensed, pure Python. I use this daily in a larger agent orchestration system — would love feedback from others building similar workflows.


Comments

rootnod3 today at 5:12 PM

Sorry, so the tool is now even circumventing human review? Is that the goal?

So the agent can now merge shit by itself?

Just let the damn thing push into prod by itself at this point.

joshuanapoli today at 5:47 PM

This looks nice! I like the idea of providing more deterministic feedback and more or less forcing the assistant to follow a particular development process. Do you have evidence that gtg improves the overall workflow? I think that there is a trade-off between risk of getting stuck (iteration without reaching gtg-green) versus reaching perfect 100% completion.

mcolley today at 10:45 AM

Super interesting, any particular reason you didn't try to solve these prior to pushing with hooks and subagents?
