Hacker News

Every layer of review makes you 10x slower

322 points by greyface- | today at 3:20 AM | 181 comments

Comments

onion2k | today at 5:17 AM

But you can’t just not review things!

Actually you can. If you shift reviews far to the left and call them code design sessions instead, raise problems in dailies, and pair programme through the gnarly bits, then 90% of what people think a review should find goes away. The expectation that you'll discover bugs and architecture or design problems doesn't exist if you've already agreed with the team what you're going to build. The remaining 10% (var naming, whitespace, and patterns) can be checked with a linter instead of a person. If you can get the team to that level, you can stop doing code reviews.

You also need to build a team that you can trust to write the code you agreed you'd write, but if your reviews are there to check someone has done their job well enough then you have bigger problems.

thot_experiment | today at 5:07 AM

Valve is one of the only companies that appears to understand this, as well as that individual productivity is almost always limited by communication bandwidth, and that communication burden grows roughly quadratically while nodes in the tree/mesh grow linearly. [or some lower power, since it doesn't need to be fully connected]
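The growth the parent describes is the classic Brooks pairwise-channel count: a fully connected team of n people has n(n−1)/2 communication channels, so channels grow quadratically while headcount grows linearly. A toy sketch:

```python
def channels(n: int) -> int:
    # Pairwise communication channels in a fully connected team of n people.
    return n * (n - 1) // 2

for n in (3, 5, 10, 20):
    print(f"{n:>2} people -> {channels(n):>3} channels")
```

Doubling the team from 10 to 20 people quadruples the channels (45 → 190), which is one reason adding people rarely doubles output.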

lelanthran | today at 4:42 AM

I wonder where the author worked that PRs are addressed in 5 hours. IME it's measured in units of days, not hours.

I agree with him anyway: if every dev felt comfortable hitting a stop button to fix a bug then reviewing might not be needed.

The reality is that any individual dev will get dinged for not meeting a release objective.

swiftcoder | today at 10:14 AM

> The job of a code reviewer isn't to review code. It's to figure out how to obsolete their code review comment, that whole class of comment, in all future cases, until you don't need their reviews at all anymore

Amen brother

yason | today at 9:35 AM

One thing that often gets dismissed is the value/effort ratio of reviews.

A review must be useful: the time spent on reviewing, re-editing, and re-reviewing must improve quality enough to warrant it. Even long and strict reviews are worth it if they actually produce nearly bug-free code.

In reality, that's rarely the case. Too often, reviewing goes down the rabbit hole of minutiae, and the time spent reaching a mutual compromise between what the programmer wants to ship and what the reviewer can agree to pass is not worth the effort. The time would be better spent on something else if the process doesn't yield substantial quality gains. Iterating a review over and over to hone it toward one interpretation of perfection will only bump the change into the next 10x bracket of the wall-clock timeline mentioned in this article.

In the adage of "first make it work, then make it correct, and then make it fast," a review only needs to require that the change reaches the first step: in other words, to prevent breaking something, or the development heading in an obviously wrong direction straight from the start. If the change works, maybe with caveats but still works, then all is generally fine enough that it can be improved in follow-up commits. For this, the review doesn't need to go into thorough detail: a few comments to point the change in the right direction are often enough. That kind of review is a very efficient use of time.

Overall, in most cases a review should be a very short part of the development process. Most of the time should be spent programming, not in review churn. A review serves as a quick checkpoint that things are still going the right way, but it shouldn't dictate the exact path used to get there.

pu_pe | today at 8:45 AM

Nice piece, and rings true. I also think startups and smaller organizations will be able to capture better value out of AI because they simply don't have all those approval layers.

alkonaut | today at 8:55 AM

I think this makes an assumption early on that things are serialized, when usually they are not.

If I complete a bugfix every 30 minutes, and submit them all for review, then I really don't care whether the review completes 5 hours later. By that time I have fixed 10 more bugs!

Sure, getting review feedback 5 hours later will force me to context switch back to 10 bugs ago and try to remember what that was about, and that might mean spending a few more minutes than necessary. But that time was going to be spent _anyway_ on that bug, even if the review had happened instantly.

The key to keeping speed up in slow async communication is just working on N things at the same time.
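A back-of-the-envelope model of the parent's point, using the article's numbers (30-minute fixes, 5-hour reviews); the context-switch cost here is an assumed figure for illustration:

```python
WORK_H = 0.5      # coding time per bugfix (the article's 30 minutes)
REVIEW_H = 5.0    # review latency per fix (the article's 5 hours)
SWITCH_H = 0.1    # assumed cost of context-switching back to an old fix

def serial_hours(n_fixes: int) -> float:
    # Wait for each review to finish before starting the next fix.
    return n_fixes * (WORK_H + REVIEW_H)

def pipelined_hours(n_fixes: int) -> float:
    # Keep coding while reviews run; only the last review stays on the critical path.
    return n_fixes * (WORK_H + SWITCH_H) + REVIEW_H

print(serial_hours(10))     # 55.0
print(pipelined_hours(10))  # 11.0
```

With review latency off the critical path, throughput approaches one fix per coding interval; the 5-hour review cost is paid once, not per fix.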

trigvi | today at 8:55 AM

Excellent article. Based on personal experience, if you build cutting edge stuff then you need great engineers and reviewers.

But for anything else, you just need an individual (not a team) who's okay (not great) at multiple things: architecting, coding, communicating, keeping costs down, testing their own work. Let them build and operate something from start to finish without review. Judge it by how well their product works.

tptacek | today at 4:31 AM

Neither before coding agents nor after has any PR taken me 5 hours to review. Is the delay here coordination/communication issues, the "Mythical Mammoth" stuff? I could buy that.

TheChelsUK | today at 9:38 AM

That’s because most teams are doing engineering wrong.

The handover to a peer for review is a falsehood. PRs were designed for open-source projects to gatekeep public contributors.

Teams should be doing trunk-based development, group/mob programming and one piece flow.

Speed is only one measure and AI is pushing this further to an extreme with the volume of change and more code.

The quality aspect is missing here.

Speed without quality is a fallacy and it will haunt us.

Don’t focus on speed alone and the need to always be busy picking up the next item. Focus on quality and throughput, keeping work in progress to a minimum (ideally 1). Deliver meaningful, reasoned changes as a team, together.

zingar | today at 11:22 AM

This is a profound point but is review really the problem or is it the handoff that crosses boundaries (me to others, our team to other team, our org to outside our org)?

ChrisMarshallNY | today at 9:32 AM

Communication overhead is the #1 schedule killer, in my experience.

Whenever we have to talk/write about our work, it slows things down. Code reviews, design reviews, status updates, etc. all impact progress.

In many cases, they are vital, and can’t be eliminated, but they can be streamlined. People get really hung up on tools and development dogma, but I've found that there’s no substitute for having experienced, trained, invested, technically-competent people involved. The more they already know, the less we have to communicate.

That’s a big reason that I have for preferring small meetings. I think limiting participants to direct technical members is really important. I also don’t like regularly-scheduled meetings (like standups). Every meeting should be ad hoc, in my opinion.

Of course, I spent a majority of my career, at a Japanese company, where meetings are a currency, so fewer meetings is sort of my Shangri-La.

I’m currently working on a rewrite of an app that I originally worked on, for nearly four years. It’s been out for two years, and has been fairly successful. During that time, we have done a lot of incremental improvements. It’s time for a 2.0 rewrite.

I’ve been working on it for a couple of months, with LLM assistance, and the speed has been astounding. I’m probably halfway through it, already. But I have also been working primarily alone, on the backend and model. The design and requirements are stable and well-established. I know pretty much exactly what needs to be done. Much of my time is spent testing LLM output, and prompting rework. I’m the “review slowdown,” but the results would be disastrous, if I didn’t do it.

It’s a very modular design, with loosely-coupled, well-tested and documented components, allowing me to concentrate on the “sharp end.” I’ve worked this way for decades, and it’s a proven technique.

Once I start working on the GUI, I guarantee that the brakes will start smoking. All because of the need for non-technical stakeholder team involvement. They have to be involved, and their involvement will make a huge difference (like a Graphic UX Designer), but it will still slow things down. I have developed ways to streamline the process, though, like using TestFlight, way earlier than most teams.

rainmaking | today at 10:09 AM

That's exactly why I think vibecoding uniquely benefits solo and small team founders. For anything bigger, work is not the bottleneck, it's someone's lack of imagination.

https://capocasa.dev/the-golden-age-of-those-who-can-pull-it...

gebalamariusz | today at 11:26 AM

Well, this all makes sense for application code, but not necessarily for infrastructure changes. Imagine a failed Terraform merge that deletes the production database and opens inbound access to 0.0.0.0/0, and you can't undo it for 10 minutes. In my opinion, you need to pay attention to the scope of risk specific to a given project.

presentation | today at 9:12 AM

I broadly agree with this, it really is all about trust. Just, as a company scales it’s hard to make sure that everybody in the team remains trustworthy – it isn’t just about personality and culture, it’s also about people actually having the skill, motivation, and track record of doing good work efficiently. Maybe AI's greatest value will be to allow teams to stay small, which reduces the difficulty of maintaining trust.

lukaslalinsky | today at 7:38 AM

Reviewing things is fast and smooth if things are small. If all the involved parties stay in the loop, review happens in real time. Review is only problematic if you split the do and review steps. The same applies to AI coding: you can choose to pair-program with it, and then it's actually helpful, or you can have it generate 10k lines of code you have no way of reviewing. You just need people to understand that switching context kills productivity. If more things are happening at the same time and your memory is limited, the time spent on load/save makes it slower than just doing one thing at a time and staying in the loop.

abtinf | today at 4:39 AM

I find this to be true for expensive approvals as well.

If I can approve something without review, it’s instant. If it requires only immediate manager, it takes a day. Second level takes at least ten days. Third level trivially takes at least a quarter (at least two if approaching the end of the fiscal year). And the largest proposals I’ve pushed through at large companies, going up through the CEO, take over a year.

dominicrose | today at 9:30 AM

Managers are expected to say that we should be productive, yet they're responsible for the framework that slows everyone down, and it's quite clear they're perfectly fine with that framework. I'm not saying it's good or bad, because it's complicated.

superlopuh | today at 7:47 AM

In my experience, a culture where teammates prioritise review times (both by checking for updates on GH a few times a day and by aggressively splitting changes into smaller patches) is reflected in much faster overall progress. It's definitely a culture thing; there's nothing technically or organisationally difficult about implementing it, it just requires people to treat team velocity as more important than personal velocity.

wei03288 | today at 9:51 AM

The 10x estimate tracks — I've seen it too. The underlying mechanism is queuing theory: each approval step is a single-server queue with high variance inter-arrival times, so average wait explodes non-linearly. AI makes the coding step ~10x faster but doesn't touch the approval queue. The orgs winning right now are the ones treating async review latency as a first-class engineering metric, same way they treat p99 latency for services.
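The non-linearity the parent mentions can be seen in the standard M/M/1 waiting-time formula, Wq = ρ / (μ(1 − ρ)), where ρ = λ/μ is reviewer utilization. The numbers below are illustrative, not from the article:

```python
def mm1_wait_hours(arrivals_per_h: float, reviews_per_h: float) -> float:
    # Mean time a PR waits in queue for a single reviewer (M/M/1 model).
    rho = arrivals_per_h / reviews_per_h  # reviewer utilization
    if rho >= 1:
        raise ValueError("queue is unstable at or above 100% utilization")
    return rho / (reviews_per_h * (1 - rho))

# One reviewer who can clear 2 PRs/hour, under rising load:
for lam in (1.0, 1.5, 1.8, 1.98):
    print(f"{lam / 2:4.0%} utilized -> {mm1_wait_hours(lam, 2.0):5.2f}h average wait")
```

Going from 50% to 99% utilization multiplies the average wait by roughly 100x (0.5h to 49.5h in this toy setup), which is why a reviewer who is "almost always busy" produces multi-day queues.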

riffraff | today at 6:04 AM

> Code a simple bug fix 30 minutes

> Get it code reviewed by the peer next to you 300 minutes → 5 hours → half a day

If it takes 5 hours for a peer to review a simple bugfix, your operation is dysfunctional.

codemog | today at 7:11 AM

This reads like a scattered mind with a few good gems, a few assumptions that are incorrect but baked into the author’s world view, and loose coherence tying it all together. I see a lot of myself in it.

I’ll cover one of them: layers of management or bureaucracy do not reduce risk. They create inaction, which gives the appearance of reducing risk, until some startup comes and eats your lunch. Upper management knows it’s all bullshit, and the game-theoretic play is to say no to things, because you’re not held accountable for saying no; so they say no and milk the money printer until the company stagnates and dies. Then they repeat it at another company (usually with a new title and a promotion).

afc | today at 8:55 AM

Waiting for a few days of design review is a pain that is easy to avoid: all we need is to be ready to spend a few months building a potentially useless system.

p0w3n3d | today at 5:08 AM

Meanwhile there are people who, as we speak, say that AI will do the review and all we need to do is provide quality gates...

simianwords | today at 7:46 AM

I don’t agree that AI can’t fix this. It is too easy to dismiss.

With AI, my task in review is to check the high-level design choices and forget about reviewing low-level details. It’s much simpler.

janpmz | today at 9:32 AM

A lot of this goes away when the person who builds also decides what to build.

orwin | today at 10:11 AM

> Now you either get to spend 27 minutes reviewing the code yourself in a back-and-forth loop with the AI (this is actually kinda fun); or you save 27 minutes and submit unverified code to the code reviewer, who will still take 5 hours like before, but who will now be mad that you’re making them read the slop that you were too lazy to read yourself.

That's me. I'm the mad reviewer. Each time I ranted against AI on this site, it was after reviewing sloppy code.

Yes, Claude Opus is better on average than my juniors/new hires. But it will make the same mistake twice. I _need_ you to fucking review your own generated code and catch the obvious issues before you submit it to me. Please.

rigorclaw | today at 11:40 AM

The descent into madness cycle isn't new to AI. Better IDEs, better frameworks, better languages — each one sped up step 1 and the bottleneck moved further downstream. AI just makes the contrast so extreme that the review bottleneck becomes impossible to ignore.

Maybe that's the real contribution of AI coding: it finally makes the actual problem visible.

halo | today at 7:58 AM

In my experience, good mature organisations have clear review processes to ensure quality, improve collaboration and reduce errors and risk. This is regardless of field. It does slow you down - not 10x - but the benefits outweigh the downsides in the long run.

The worst places I’ve worked have a pattern where someone senior drives a major change without any oversight, review or understanding causing multiple ongoing issues. This problem then gets dumped onto more junior colleagues, at which point it becomes harder and more time consuming to fix (“technical debt”). The senior role then boasts about their successful agile delivery to their superiors who don’t have visibility of the issues, much to the eye-rolls of all the people dealing with the constant problems.

DeathArrow | today at 8:33 AM

I totally agree with his ideas, but he somehow seems to just be stating the obvious: startups move faster than big orgs, you can solve a problem by dividing it into smaller problems (if possible), and AI experimentation is cheap.

usr1106 | today at 6:22 AM

What makes me slower at the moment is the AI slop my team lead posts into reviews. I have to spend time arguing why that's not a valid comment.

sublinear | today at 4:40 AM

As they say: an hour of planning saves ten hours of doing.

You don't need so much code or maintenance work if you get better requirements upfront. I'd much rather implement things at the last minute knowing what I'm doing than cave in to the usual incompetent middle manager demands of "starting now to show progress". There's your actual problem.

camillomiller | today at 6:53 AM

>> Now you either get to spend 27 minutes reviewing the code yourself in a back-and-forth loop with the AI (this is actually kinda fun); or you save 27 minutes and submit unverified code to the code reviewer, who will still take 5 hours like before, but who will now be mad that you’re making them read the slop that you were too lazy to read yourself. Little of value was gained.

This seems to check out, and it's the reason why I can't reconcile the industry's claims about worker replacement with reality. I still wonder when a reckoning will come, though. It seems long overdue in the current environment.

PunchyHamster | today at 8:42 AM

> I know what you're thinking. Come on, 10x? That’s a lot. It’s unfathomable. Surely we’re exaggerating.

See this rarely-known trick! You can be up to 9x more efficient if you code something else while you wait for review.

> AI

projectile vomits

Fuck engineering, let's work on methods to make artificial retard be more efficient!

nfw2 | today at 8:52 AM

from article:

1. Whoa, I produced this prototype so fast! I have super powers!

2. This prototype is getting buggy. I’ll tell the AI to fix the bugs.

3. Hmm, every change now causes as many new bugs as it fixes.

4. Aha! But if I have an AI agent also review the code, it can find its own bugs!

5. Wait, why am I personally passing data back and forth between agents

6. I need an agent framework

7. I can have my agent write an agent framework!

8. Return to step 1

The author seems to imply this is recursive when it isn't. When you have an effective agent framework, you can ship more high-quality code quickly.

simonw | today at 4:42 AM

This is one of the reasons I'm so interested in sandboxing. A great way to reduce the need for review is to have ways of running code that limit the blast radius if the code is bad. Running code in a sandbox can mean that the worst that can happen is a bad output as opposed to a memory leak, security hole or worse.
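As a minimal sketch of the blast-radius idea (not simonw's actual setup): on POSIX systems you can at least cap CPU time and address space for a child process with rlimits. This bounds runaway resource use but is not real isolation; filesystem and network sandboxing needs containers, seccomp, or a VM.

```python
import resource
import subprocess
import sys

def run_limited(code: str, cpu_s: int = 2, mem_bytes: int = 512 * 2**20) -> str:
    """Run Python code in a child process with CPU/memory rlimits (POSIX only)."""
    def set_limits() -> None:
        # Applied in the child between fork and exec.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_s, cpu_s))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores user site dirs
        capture_output=True, text=True,
        timeout=cpu_s + 5, preexec_fn=set_limits,
    )
    return proc.stdout

print(run_limited("print(sum(range(100)))"))
```

A busy-loop or memory bomb passed to `run_limited` gets killed by the kernel instead of taking down the host, which is the "worst case is a bad output" property described above, in miniature.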

jbrozena22 | today at 4:46 AM

I think the problem is the shape of review processes. People higher up in the corporate food chain are needed to give approval on things. These people also have to manage enormous teams with their own complexities. Getting on their schedule is difficult, and giving you a decision isn't their top priority, slowing down time to market for everything.

So we will need to extract decision-making responsibility from people management and let the decision maker focus exclusively on reviewing inputs and approving or rejecting them, under an SLA.

My hypothesis is that the future of work in tech will be a series of these input/output queue reviewers. It's going to be really boring I think. Probably like how it's boring being a factory robot monitor.

markbao | today at 4:39 AM

If you save 3 hours building something with agentic engineering and that PR sits in review for the same 30 hours or whatever it would have spent sitting in review if you handwrote it, you’re still saving 3 hours building that thing.

So in that extra time, you can now stack more PRs that still have a 30 hour review time and have more overall throughput (good lord, we better get used to doing more code review)

This doesn’t work if you spend 3 minutes prompting and 27 minutes cleaning up code that would have taken 30 minutes to write anyway, as the article details, but that’s a different failure case imo
