Hacker News

wk_end · yesterday at 8:58 PM

Around a year ago I started a new position at a very large tech company that I won't name, working on a pre-existing web project there. The code base isn't terrible - though not very good either, by and large - but it's absolutely massive, often over-engineered, pretty unorthodox, and definitely has some questionable design decisions; even after more than a year of working with it I still feel like a beginner much of the time.

This year I grudgingly bit the bullet and began using AI tools, and to my dismay they've been a pretty big boon for me, in this case. Not just for code generation - they're really good at probing the monolith and answering questions I have about how it works. Before, I'd spend days poring over code before starting work, trying to figure out the right way to build something or where to break in, pinging people over in India or eastern Europe with questions and hoping they'd reply overnight. AI's totally replaced that, and it works shockingly well.

When I do fall back on it for code generation, it's mostly just to mitigate the tedium of writing boilerplate. The code it produces tends to be pretty poor - both in terms of style and robustness - and I'll usually need to take at least a couple of passes over it to get it up to snuff. I do find this faster than writing everything out by hand in the end, but not by a lot.

For my personal projects I don't find it adds much, but I do enjoy rubber ducking with ChatGPT.


Replies

abcde666777 · yesterday at 10:39 PM

Using these tools for understanding seems to be one of the best use cases - lots of pros, and the cons are less dangerous (the worst-case scenario is a misleading understanding, which can be minimized by directly double-checking the claims being made).

In fact, an emerging theme seems to be that whenever we use these tools, it's valuable to maintain a human understanding of what's actually going on.