Which goes to prove that the bottleneck isn't in writing the code. It is in reading and understanding the code.
We've all had that one "productive" engineer on our team who would write huge PRs containing large swaths of refactoring, warranted or not, and that was long before anyone could imagine, even in their wildest dreams, that neural networks could generate code in such huge amounts.
The net effect of such a "productive" engineer was never increased team velocity. Instead, the team would slow to a crawl: either his PRs had to be reviewed in detail, eating up everyone's time, or, if you gave them a cursory LGTM, they would blow up in production and force everyone back to the drawing board. Meanwhile, the project architecture would have shifted so rapidly thanks to his "productivity" that no one had a clear picture of the codebase (what's where) except that one "super smart, talented, productive, loyal to the company goals" guy.
I was (almost) just that guy for one PR. I removed something like 20% or more of the codebase by making better use of the libraries and external tools we already had, but it meant almost every single thing we were doing had to use the library function instead of the one we wrote. Assuming you have good regression tests and linters, so you know the code works and isn't terrible, the review should be about overall high-level quality instead of poring over every character to check correctness. It was still a pain to review, though.
"[...] bottleneck isn't in writing the code. It is in reading and understanding the code". 100% agreed! Furthermore, the more code is generated by AI, the fewer people will actually understand it!
With AI everybody gets to be “that guy” now.
Without AI, both writing and reading code are bottlenecks.
How many times have you reviewed your old code and been appalled at the terrible quality? You personally created slop; it's no different from GenAI output except that a human had to spend precious time crafting it. You likely were indeed bottlenecked by your ability to churn out code that you just had to get to work, for one reason or another.
The real issue is in the asymmetry when one party can use automation to create more code than another party can possibly manually verify.
I don't understand why one wouldn't just auto-reject big PRs and tell the author to make smaller ones. It sounds like a communication and social problem, not a technological one.
Even with AI, just tell it to make smaller, self-contained PRs. I do this with Claude or GPT models and they do just fine.
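The auto-reject idea above is straightforward to mechanize. A minimal sketch of such a size gate, assuming a hypothetical CI step that parses `git diff --numstat` output against a base branch (the 400-line threshold and the function names are arbitrary examples, not any team's actual policy):

```python
import subprocess

MAX_CHANGED_LINES = 400  # arbitrary example threshold


def total_changed(numstat: str) -> int:
    """Sum added + deleted lines from `git diff --numstat` output."""
    total = 0
    for line in numstat.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" for line counts; skip them
            total += int(added) + int(deleted)
    return total


def should_reject(numstat: str, limit: int = MAX_CHANGED_LINES) -> bool:
    """True if the PR exceeds the size limit and should be sent back."""
    return total_changed(numstat) > limit


def pr_diff_numstat(base: str = "origin/main") -> str:
    """Fetch the PR's diff stats; meant to run inside the CI checkout."""
    return subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
```

In CI you would fail the build when `should_reject(pr_diff_numstat())` is true, with a message asking for smaller, self-contained PRs. Of course, a hard line count is a blunt instrument; the context-dependent exceptions discussed below are exactly why some teams prefer a human override to a strict gate.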
Context is everything for massive PRs.
If you never have a massive PR from a dynamite session, then you can never be better than "average and plodding". So the question is: what's the context of the massive PR, and how should it be handled?
* Mature product making money, intermediate engineer just refactored everything so it's "better"? Shut the fuck up, kindly please, you will have to demonstrate that you understand why things are this way and why it's better before we even have this conversation.
* Greenfield dev, trusted engineer getting from 0 -> 1 on something big? Maybe it shouldn't be held up in committee for 2 weeks. Maybe most objections will be superficial stylistic concerns.
Obviously there are many other contexts; these are two extremes in a multi-dimensional space. But if the process is "we litigate every line", then that's just not an innovative place to be. Yes, most PRs should be small, targeted, easy to review, and tied to a ticket, but if you're innovating? By definition it's a little different.
That guy is now running twenty agents in parallel and really scaling up his wonderful impact.
"Which goes to prove that the bottleneck isn't in writing the code. It is in reading and understanding the code."
So all we have to do is write code without reading or understanding it! Larry Wall was right all along!
The reality is somewhere in the middle. Features are shipping 2x to 5x faster at a lot of organizations, with solid code still being produced and reviewed.
Anyone trying to suggest that AI hasn't sped up quality code production is just insisting on keeping their head in the sand, IMO.
Sounds like a tactical tornado. It made me think of this paragraph:
“Almost every software development organization has at least one developer who takes tactical programming to the extreme: a tactical tornado. The tactical tornado is a prolific programmer who pumps out code far faster than others but works in a totally tactical fashion. When it comes to implementing a quick feature, nobody gets it done faster than the tactical tornado. In some organizations, management treats tactical tornadoes as heroes. However, tactical tornadoes leave behind a wake of destruction. They are rarely considered heroes by the engineers who must work with their code in the future. Typically, other engineers must clean up the messes left behind by the tactical tornado, which makes it appear that those engineers (who are the real heroes) are making slower progress than the tactical tornado.” - John Ousterhout, A Philosophy of Software Design