> there’s one depressing anecdote that I keep on seeing: the junior engineer, empowered by some class of LLM tool, who deposits giant, untested PRs on their coworkers—or open source maintainers—and expects the “code review” process to handle the rest.
It's even worse than that: non-junior devs are doing it as well.
Unfortunately, junior behavior exists in many people with "senior" titles, especially since "senior" is often handed out two years out of school.
It’s always the developers who can break / bypass the rules who are the most dangerous.
I always think of the "superstars" or "10x" devs I have met at companies. Yeah, I could put out a lot of features too if I could bypass all the rules and just churn out greenfield code that covers only the initial use case (and sometimes even leave the rest for other folks to clean up).
Where are the junior devs while their code is being reviewed? I'm not a software developer, but I'd be loath to review someone's work unless they have enough skin in the game to be present for the review.
>It's even worse than that: non-junior devs are doing it as well.
This might be unpopular, but that seems more like an opportunity if we want to keep allowing AI to generate code.
One of the annoying things engineers have to deal with is stopping whatever they're doing to do a review. Obviously this gets worse as more total code is being produced.
We could eliminate that interruption by having someone do more thorough code reviews, full-time. Someone who is not bound by sprint deadlines and tempted to gloss over reviews to get back to their own work. Someone who has time to pull down the branch, actually run the code, and lightly test things from an engineer's perspective so QA doesn't hit super obvious issues. They can also be the gatekeeper for code quality and PR quality.
This mirrors my experience with the texting while driving problem: The debate started as angry complaints about how all the kids are texting while driving. Yet it’s a common problem for people of all ages. The worst offender I knew for years was in her 50s, but even she would get angry about the youths texting while driving.
Pretending it’s just the kids and young people doing the bad thing makes the outrage easier to sell to adults.
It's even worse than that! Non-devs are doing it as well.
Even worse for me, some of my coworkers were doing that _before_ coding LLMs were a thing. Now LLMs are letting them create MRs full of untested nonsense even faster, which feels like a DDoS attack on my productivity.
As always, this requires nuance. Just yesterday and today, I did exactly that to my direct reports (I'm director-level). We had gotten a bug report, and the team had collectively looked into it and believed it was not our problem, but that of an external vendor. We reported it to the vendor, who looked into it, tested it, and then pushed back and said it was our problem. My team is still more LLM-averse than me, so I had Codex look at it, and it believed it found the problem and prepared the PR. I did not review or test the PR myself, but instead assigned it to the team to validate, partly as a learning exercise. They looked it over and agreed it was a valid fix for a problem on our side. I believe that process was better than me just fully validating it myself, and it was part of the process of encouraging them to use LLMs as a tool for their work.
One thing I've pushed developers on my team to do since way before AI slop became a thing was to review their own PR. Go through the PR diff and leave comments in places where it feels like a little explanation of your thought process could be helpful in the review. It's a bit like rubber duck debugging, I've seen plenty of things get caught that way.
As an upside, it helps with AI slop too. Because as I see it, what you're doing when you use an LLM is becoming a code reviewer. So you need to actually read the code and review it! If you have not reviewed it yourself first, I am not going to waste my time reviewing it for you.
It helps obviously that I'm on a small team of a half dozen developers and I'm the lead, and management hasn't even hinted at giving us stupid decrees like "now that you have Claude Code you can do 10x as many features!!!1!".
There are folks who perform like juniors but have just been in the business long enough to be promoted.
Title only loosely tracks skill level, and with AI that may become even more true.
Juniors aren't even the problem here; they can and should be taught better. That's the point.
It's when your PEERS do it that it's a huge problem.
There’s a PR on a project I contribute to that is as bad/big as some PRs by problematic coworkers. I’m not saying it’s AI work, but I’m wondering.
> expects the “code review” process to handle the rest.
The LLMs/agents have actually been doing a stellar job with code reviews. Frankly that’s one area that humans rush through, to the point it’s a running joke that the best way to get a PR granted a “lgtm” is to make it huge. I’ve almost never seen Copilot wave a PR through on the first attempt, but I usually see humans doing that.
In the company I'm at this is beginning to happen. PMs want to "prototype" new features and expect the engineers to finish up the work, with the expectation that it "just needs some polishing". What would be your recommendation on how to handle this constructively? Flat out rejecting LLMs as a prototyping tool is not an option.
Not at Meta - their job is to "Move fast and break things". I think people are just doing what they've been told.
Don’t accept PRs without test coverage? I mean, LLMs can do those also, but it’s something.
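Even a dumb automated gate can enforce the spirit of this before a human ever looks at the PR. Below is a minimal sketch (my own illustration, not any real CI integration; the file-layout conventions like a `tests/` directory or `test_` prefix are assumptions) of a check that blocks a PR whose changed files include source code but no tests:

```python
# Hypothetical pre-merge check: block PRs that change source files
# without touching any test files. Path conventions are assumptions.

def is_test_file(path):
    """Heuristic: a file is a test if any path component starts with 'test'."""
    return any(
        part.startswith("test")
        for part in path.replace("\\", "/").split("/")
    )

def should_block(changed_files):
    """Block when source code changed but no test file did."""
    source_changed = any(
        f.endswith(".py") and not is_test_file(f) for f in changed_files
    )
    tests_changed = any(is_test_file(f) for f in changed_files)
    return source_changed and not tests_changed

# A source-only PR gets blocked; one that also touches tests passes.
print(should_block(["app/handlers.py", "app/models.py"]))           # True
print(should_block(["app/handlers.py", "tests/test_handlers.py"]))  # False
```

It's trivially gameable (an LLM can emit useless tests too, as the comment notes), but it at least forces the submitter to put *something* alongside the change.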
Code review is an unfunded mandate. It is something the company demands while not really doing anything to make sure people get rewarded for doing it.
Just shove a code review agent in the middle. Problem solved
[Edit] Man, people don't get /s unless it's explicit.
Really good code reviewing AIs could handle this!
Quality is not rewarded at most companies. It's not going to turn into more money; it might turn into less work later, but in all likelihood the author won't be around to reap the benefits of that, because they will have moved on to another company.
On the contrary: since more effort doesn't yield more money, but less effort can yield the same money, the strategy is to shrink the time spent on work to the smallest amount possible. LLMs are currently the best way to do that.
I don't see why this has to be framed as a bad thing. Why should anyone care about the quality of software that they don't use? If you wouldn't work on it unless you were paid to, and you can leave if and when it becomes a problem, then why spend mental energy writing even a single line?
Yeah, it is way worse than that. In the past two days, I have had two separate non-engineer team members ask some AI agent how a mobile bug should be fixed and post the AI response in the ticket as the main content, context, and acceptance criteria. I then had to waste my time reading this crap (because this is really all that is in the ticket) before starting my own efforts to understand what the real ask or needed change in behavior is.