Hacker News

cdata · yesterday at 9:57 PM · 2 replies

AI has pushed me to arrive at an epiphany: new technology is good if it helps me spend more time doing things that I enjoy doing; it's bad if it doesn't; it's worse if I end up spending more time doing things that I don't enjoy.

AI has increased the sheer volume of code we are producing per hour (and probably also the amount of energy spent per unit of code). But it hasn't spared me or anyone I know the cost of testing, reviewing, or refining that code.

Speaking for myself, writing code was always the most fun part of the job. I get a dopamine hit when CI is green, sure, but my heart sinks a bit every time I'm assigned to review a 5K+ LOC mountain of AI slop (and it has been happening a lot lately).


Replies

wvenable · yesterday at 11:22 PM

I've been using it to do big refactors and large changes that I would previously have avoided because the benefits didn't outweigh the costs of doing them. I think half the problem people have is just using AI for the wrong stuff.

I don't see why it doesn't help with reviewing, testing, or refining code either. One of the advantages I find is that an LLM "thinks" differently from me so it'll find issues that I don't notice or maybe even know about. I've certainly had it develop entire test harnesses to ensure pre/post refactoring results are the same.
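A minimal sketch of what such a harness can look like, in TypeScript, with hypothetical legacyImpl/refactoredImpl stand-ins for the pre- and post-refactor code paths:

    import assert from "node:assert/strict";

    // Hypothetical stand-ins for the old and new versions of the code.
    function legacyImpl(xs: number[]): number[] {
      return [...xs].sort((a, b) => a - b);
    }

    function refactoredImpl(xs: number[]): number[] {
      return [...xs].sort((a, b) => a - b); // the rewritten version under test
    }

    // Compare both implementations over many generated inputs; any
    // divergence fails loudly before the refactor ships.
    for (let trial = 0; trial < 1000; trial++) {
      const input = Array.from({ length: trial % 50 }, () =>
        Math.floor(Math.random() * 100)
      );
      assert.deepStrictEqual(
        refactoredImpl(input),
        legacyImpl(input),
        `mismatch on input ${JSON.stringify(input)}`
      );
    }
    console.log("refactored output matches legacy output on all trials");

This only gives real assurance when the function is deterministic for a given input, which is worth checking before trusting the green run.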

That said, I have "held it wrong" and had it do the fun stuff instead, and that felt bad. So I just changed how I used it.

Yodel0914 · yesterday at 10:31 PM

I agree. I'm using Copilot more and more as it improves, but it's getting better at the fun stuff and leaving me to do the less fun stuff. I'm in a role where I need to review code across multiple teams, and as their output increases, so does my review load. The biggest issue is that the people who lean on Copilot the most are the least skilled at writing/reviewing code in the first place, so not only do I have more to review, it's worse [1].

My medium term concern is that the tasks where we want a human in the loop (esp review) are predicated on skills that come from actually writing code. If LLMs stagnate, in a generation we’re not going to have anyone who grew up writing code.

[1]: not that LLMs write objectively bad code, but they don't follow our standards and patterns. Like, we have an internal library of common UI components and CSS, but the LLM will pump out custom stuff.

There is some stuff that we can pick up with analysers and fail the build, but a lot of things just come down to taste and corporate knowledge.
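As an illustration of the analyser idea, a custom ESLint rule can flag raw elements that the internal component library already covers; wired into CI, ESLint's nonzero exit code on errors is what fails the build. The rule below is a sketch only: the element list and "@acme/ui" library name are invented, and a JSX-aware parser is assumed.

    // Hypothetical ESLint rule: flag raw HTML elements that the internal
    // component library ("@acme/ui", invented for illustration) covers.
    module.exports = {
      meta: {
        type: "suggestion",
        schema: [],
        messages: {
          useLibrary:
            "Use the shared @acme/ui component instead of a raw <{{tag}}>.",
        },
      },
      create(context) {
        // Elements the internal library is assumed to wrap.
        const covered = new Set(["button", "input", "select", "table"]);
        return {
          JSXOpeningElement(node) {
            if (
              node.name.type === "JSXIdentifier" &&
              covered.has(node.name.name)
            ) {
              context.report({
                node,
                messageId: "useLibrary",
                data: { tag: node.name.name },
              });
            }
          },
        };
      },
    };

The taste and corporate-knowledge calls are exactly the part a rule like this can't encode, which is why the review load stays human.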