As someone who has switched to exclusively coding with AI after 30 years of coding by myself, I find it really weird when people take credit for the lines of code and features that AI generates. Flexing that one "coded" tens or hundreds of thousands of lines per day is a bit cringe, seeing as it's really just the prompt that one typed.
Meta apparently now has a "leaderboard" for who is using the most AI - consuming the most tokens. Must make Anthropic happy, since Meta is using Claude, and accounts for some significant percentage (10%? 20%?) of their total volume.
Yes!
I don't mind it so much when it's a newbie or non-techie who has never actually written code before, because bless their hearts, they did it! They got some code working!
But if you've been developing for decades, you know that counting lines of code means nothing, less than nothing, and that you could probably achieve the same result in half the lines if you thought about it a bit longer.
And to claim this as an achievement when it's LLM-generated... that's not a boast. That doesn't mean what you think it means.
But I guess we hit the same old problem we've always had: how do you measure productivity in software development? If you wanted to boast about how an LLM is making you 100x more productive, what metric could you use? LOC is the most easily measurable, really, really terrible measure, the one PMs have been using since we started doing this, because everything else is hard.
If anything, couldn't huge amounts of code changes or LoC be a sign of a poor outcome?
Some argue LoC is irrelevant as a quality/complexity metric because (in this new software product development lifecycle) implementation + testing + maintenance is wholly overseen by agents.
It has never before been possible to code and deploy software from nothing but specs. Whatever software Garry is building, they are products he couldn't have built otherwise. LoC, in that context, serves as a reminder of the capability of the agents to power/slog through reqs/specs (quite incredibly so).
Besides, critical human review can always be fed back as instructions to agents.
It's a spectrum, isn't it? From targeted edits that you approve manually - which I think you can reasonably take credit for - all the way to full-blown vibe-coded apps where you're hardly involved in the design process at all.
And then there's this awkward bit in the middle where you're not necessarily reviewing all the code the AI generates, but you're the one driving the architecture, coming up with feature ideas, pushing for refactors after reading the code, etc. This is where I'm at currently, and it's tricky: while I'd never say that I "wrote" the code, I feel I can claim credit for the app as a whole because I was so heavily involved in the process. The end result, I feel, is similar to what I would've produced by hand; it just happened a lot faster.
(granted, the end result is only 2000 LoC after a few weeks working on and off)