Hacker News

AI Lazyslop and Personal Responsibility

57 points by dshacker yesterday at 7:56 PM | 67 comments

Comments

solomonb yesterday at 8:30 PM

> After I “Requested changes” he’d get frustrated that I’d do that, and put all his changes in an already approved PR and sneak merge it in another PR.

This is outrageous regardless of AI. Clearly there are process and technical barriers that failed in order to even make this possible. How does one commit a huge chunk of new code to an approved PR and not trigger a re-review?
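For what it's worth, on GitHub this particular hole is usually closed by enabling "dismiss stale pull request approvals when new commits are pushed" in branch protection, so any commit after approval forces a fresh review. A minimal sketch of the request body, assuming the standard REST endpoint `PUT /repos/{owner}/{repo}/branches/{branch}/protection` (values here are illustrative):

```json
{
  "required_pull_request_reviews": {
    "dismiss_stale_reviews": true,
    "required_approving_review_count": 1
  },
  "required_status_checks": null,
  "enforce_admins": true,
  "restrictions": null
}
```

With `dismiss_stale_reviews` on, piggybacking new code onto an already-approved PR knocks it back to "review required" instead of staying mergeable.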

But more importantly, in what world does a human think it is okay to be sneaky like this? Being able to communicate and trust one another is essential to the large scale collaboration we participate in as professional engineers. Violating that trust erodes all ability to maintain an effective team.

dkarl yesterday at 8:13 PM

I have no idea what AI changes about this scenario. It's the same scenario as when Mike did this with 1600 lines of his own code ten years ago; it just happens more often, since Mike comes up with 1600 lines of code in a day instead of in a sprint.

> I don’t blame Mike, I blame the system that forced him to do this.

Bending over backwards not to be the meanie is pointless. You're trying to stop him because the system doesn't really reward this kind of behavior, and you'll do Mike a favor if you help him understand that.

xyzsparetimexyz yesterday at 8:05 PM

If you get a 1600 line PR you just close it and ask them to break it up into reviewable chunks. If your workplace has an issue with that, quit. This was true before AI and will be true after AI.

augusteo today at 2:17 AM

Large, hard-to-review PRs existed long before AI. The fix is the same: reject them, ask for smaller chunks. If your team doesn't have the culture to do that, AI just accelerates the dysfunction that was already there.

The teams I've seen struggle with this usually have a review bottleneck problem. One or two people doing all the reviews, so they wave things through. AI didn't cause that.

babblingfish yesterday at 8:30 PM

This is consistent with my own observations of LLM-generated code increasing the burden on reviewers. You either review the code carefully, putting more effort into it than the original author did, or you approve it without careful review. I feel like the latter is becoming more common. This is basically creating tech debt that will only be realized later by future maintainers.

krzysz00 yesterday at 8:28 PM

This does seem to align decently well with, for example, the policy the LLVM project recently adopted (https://llvm.org/docs/AIToolPolicy.html), which allows AI but requires a human in the loop who understands the code, and allows for fast closure of "extractive" PRs that are mainly a timesink for reviewers, where the author doesn't seem to be quite sure what's going on.

yesitcan yesterday at 8:29 PM

> why do I need tests? It works already

> I don't blame Mike

You should blame Mike.

colinmilhaupt yesterday at 8:17 PM

Love to see the responsible use disclosure. I did the same several months back. https://colinmilhaupt.com/posts/responsible-llm-use/

Also love the points during review! Transparency is key to understanding critical thinking when integrating LLM-assisted coding tools.

dmmartins yesterday at 8:17 PM

> What was your thought process using AI?

> Share your prompts! Share your process! It helps me understand your rationale.

Why? Does it matter? Do you ask the same questions of people who don't use AI? I don't like using AI for code because I don't like the code it generates and having to go over it again and again until I like it, but I don't care how people write code. I review the code that's in the PR, and if there's something I don't understand or agree with, I comment on the PR.

Other than the 1600-line PR that's hard to review, it feels like the author just wants to be in the way and control everything other people are doing.

mrkeen yesterday at 8:43 PM

> Mike sent me a 1600 line pull-request with no tests, entirely written by AI, and expected me to approve it immediately as to not to block him on his deployment schedule.

Both Mike and the manager are cargo-culting the PR process too. Code review is what you do when you believe it's worth losing velocity in order for code to pass through the bottleneck of two human brains instead of one.

LLMs are about gaining velocity by passing less code through human brains.

ghm2199 yesterday at 8:35 PM

If you work in a company where some kind of testing is optional to get your PR merged, run in the opposite direction. Tests show that your engineer _thought_ things through. They communicate the intended use, and many times, when well written, they are as clarifying as documentation. I would even be willing to accept integration or manual tests when writing unit tests is not possible.
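To illustrate that point (the function and its rules here are invented for the example, not taken from anyone's codebase): a well-named test reads like a spec for the intended behavior, not just a pass/fail check.

```python
def normalize_email(address: str) -> str:
    """Lower-case the domain of an email address, preserving the local part."""
    local, _, domain = address.partition("@")
    return f"{local}@{domain.lower()}"


def test_domain_is_case_insensitive_but_local_part_is_preserved():
    # Intent, documented: the domain is case-insensitive, but the local
    # part may be case-sensitive, so we deliberately leave it untouched.
    assert normalize_email("Alice@EXAMPLE.COM") == "Alice@example.com"
```

A reviewer who reads only the test name and the comment already knows what the author decided and why; slop PRs with no tests carry none of that.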

serial_dev yesterday at 8:36 PM

> put all his changes in an already approved PR and sneak merge it in another PR.

> I don’t blame Mike, I blame the system that forced him to do this.

Oh, you should definitely blame Mike for this. It’s like blaming the system when someone in the kitchen spits in a customer’s food. Working with people like this is horrible because you know they don’t mind lying, cheating, and deceiving.

fnoef yesterday at 8:27 PM

While I agree with the sentiment of the post, I’ve also come to the conclusion that it’s not worth fighting the system. If you can’t quit your job, then just do what everyone else is doing: use AI to write and review code, and make sure everyone is happy (especially the management).

Ozzie_osman yesterday at 8:38 PM

I call it L-ai-ziness and I try to reduce it on my team.

If it has your name on it, you're accountable for it to your peers. If it has our name on it as a company, we're accountable for it to our users. AI doesn't change that.

serial_dev yesterday at 8:50 PM

Lazyslop PRs offload the work to code reviewers while keeping all the benefits to the PR creator.

Now creating a 1600-LOC PR takes about ten minutes; reviewing it takes at least an hour. Mike submits a bunch of PRs, and the rest of the team tries to review them to prevent the slop from causing an outage at night or blowing up the app. Mike is a hero: he really embraced AI, he leveraged it to get 100x productivity.

This works for a while, until everyone realizes that Mike gets the praise while they get reprimanded for not shipping their features fast enough. After a couple of these sour experiences, other developers will follow suit and embrace the slop. Now there is nobody to stop the train wreck. The ones who really cared left; the ones who cared at least a little gave up and churn out slop.

dragoman1993 yesterday at 8:06 PM

At the end there's a typo: "catched" should be "caught".

Otherwise, agree-ish. There should be business practices in place for responsible AI use, so coworkers don't have to suffer from bad usage.

firasd yesterday at 8:39 PM

Unfortunately, the list of AI edits this person declares at the bottom of their post is self-refuting.

If you use AI as a Heads-up Display you can't make a giant scroll of every text change you accepted.

throwawaysleep yesterday at 8:05 PM

> Then, I’d get a ping from his manager asking on why am I blocking the review.

If you are in a culture like this, you may as well just ship slop.

If management wants to break stuff, that's on them.

epolanski yesterday at 8:18 PM

Pointless blog post about made up situations that never happened.

1. Companies that push and value slop velocity do not have all these bureaucratic merge policies. They change or relax them, and a manager would just accept it without needing to ping the author.

2. If the author were on the high paladin horse of valuing the craft, he would not be working in such a place. Or he would be half-assing slop too while concentrating on writing proper code for his own projects, like most of us do when we end up in bs jobs.
