Hacker News

gpm · today at 4:54 AM

When I've reached for stacked PRs (in the past, not using this feature) it's precisely because I've split my change into smaller PRs being merged individually.

I've just written those smaller PRs at once, or in quick enough succession that the previous PRs weren't merged before the later ones were ready. And the later ones relied on the previous ones because that's how working on a feature works.

The earlier PRs are absolutely reviewable and testable without relying on the later ones. The later ones are just treating the earlier ones as part of the codebase. I.e. everything here looks like two different PRs except the timing.

An obvious example would be "implement API for a feature" and then "implement UI that uses that API". Two different PRs. The second fundamentally relies on the first.


Replies

conor_f · today at 10:29 AM

This is a perfect example, and one I've often seen in practice. There's nothing blocking in this workflow at all, and no reason these changes cannot be made as independent PRs, e.g.:

1) API implementation - including tests and docs, this should be perfectly acceptable to review and merge independently

2) UX implementation - feature flagged, with dummy API responses; easy to review and merge

3) One quick "glue" PR where the feature can be integration tested, etc.
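A minimal sketch of what step 2 can look like: the UI code is developed against a dummy API response behind a feature flag, so it can be reviewed and merged before the real API is wired up in the glue PR. All names here (the flag, the endpoint, the payload shape) are illustrative assumptions, not from the thread.

```python
# Hypothetical feature-flagged UI merged ahead of the API glue.
FEATURE_FLAGS = {"new_reports_ui": False}  # off by default in production

def fetch_report(api_client=None):
    """Return report data: a dummy payload until the glue PR wires the real API."""
    if api_client is not None:
        return api_client.get("/reports")  # real call, added later by the glue PR
    return {"rows": [], "status": "dummy"}  # stable shape the UI can be tested against

def render_report_page(api_client=None):
    """New page is dark-launched: users see the legacy page until the flag flips."""
    if not FEATURE_FLAGS["new_reports_ui"]:
        return "legacy report page"
    data = fetch_report(api_client)
    return f"new report page ({data['status']}, {len(data['rows'])} rows)"
```

With the flag off, production behavior is unchanged; tests can flip the flag on and exercise the new page against the dummy payload, which is what makes the PR safe to merge on its own.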

This prevents awful merge conflicts, multiple rounds of increasingly complex stacked reviews, and a host of other annoyances.

Is there any reason the stacked PR workflow is better that I'm overlooking?

scaryclam · today at 9:18 AM

I do this as well, but there is a workflow problem to solve: getting PRs merged when they need to be so you can continue working.

It's not a simple problem to solve; we can't all just jump because someone finished some work, after all. But if the PRs are OK to rubber stamp and merge, and they're safely behind a feature flag, then it could be as simple as letting the submitter merge without an extra review. That can of course be contentious, but then we can ask "why not?" and figure out what non-human gateways need to be added to make it possible.
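One way to picture such a "non-human gateway" is a merge policy check that permits self-merge only when automated signals are green and the change is safely dark. This is purely an illustrative sketch; the field names are assumptions, not any real platform's API.

```python
# Hypothetical self-merge gate: human review can be skipped only when
# every automated condition holds.
def can_self_merge(pr: dict) -> bool:
    """Allow the submitter to merge without a reviewer if automated gates pass."""
    return (
        pr.get("ci_passed", False)                 # tests and linters are green
        and pr.get("behind_feature_flag", False)   # no user-visible change yet
        and not pr.get("touches_migrations", False)  # riskier changes still need eyes
    )
```

The point of encoding the policy is exactly the "why not?" question above: each condition is an explicit, checkable substitute for a human rubber stamp.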

I'm finding myself increasingly interested in understanding what friction can be removed from the software review, merge and release process, without sacrificing safe, well tested, understandable code that follows good standards.