One of my frustrations with AI, and one of the reasons I've settled into a tab-complete based usage of it for a lot of things, is precisely that the code it puts out in the language I'm using contains a lot of things I consider errors, given the "middle-of-the-road" style it has picked up from all the code it has ingested. For instance, I use a policy of "if you don't create invalid data, you won't have to deal with invalid data" [1], but I have to fight the AI on that all the time, because creating invalid data is a routine mistake programmers make, so the AI makes the same mistake repeatedly. I have to fight the AI to properly create types [2], because it just wants to slam everything out as base strings and integers, and inline all manipulations on the spot (repeatedly, if necessary) rather than define methods... at all, let alone correctly use methods to maintain invariants. (I've seen it make methods on some occasions. I've never seen it correctly define invariants with methods.)
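To make that concrete, here's a minimal C++ sketch of the kind of type I mean (the names are made up for illustration): the constructor is the only way in, it throws if the invariant can't be established, and everything downstream takes the type rather than a raw string, so nothing downstream ever has to re-validate.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical example: a Username that cannot hold invalid data.
// The validating constructor is the only way to make one, so any
// function that receives a Username can rely on the invariant.
class Username {
public:
    explicit Username(std::string raw) : value_(std::move(raw)) {
        if (value_.empty() || value_.size() > 32)
            throw std::invalid_argument("username must be 1-32 characters");
    }
    const std::string& str() const { return value_; }

private:
    std::string value_;  // invariant: 1-32 chars, enforced at construction
};

// Downstream code takes Username, not std::string -- no defensive checks.
std::string greet(const Username& u) { return "hello, " + u.str(); }
```

The AI default is the opposite: pass `std::string` everywhere and sprinkle length checks at every call site.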
Using tab complete gives me the chance to generate a few lines of a solution, then stop it, correct the architectural mistakes it is making, and then move on.
To the AI's credit, once corrected, it is reasonably good at using the correct approach. I would like to be able to prompt the tab completion better, and the IDEs could stand to feed the tab-completion model more information from the LSP about available methods, their arguments, and so on, but that's a transient feature issue rather than a fundamental problem. Which is also a reason I fight the AI on this matter rather than just sitting back: in the end, AI benefits from well-organized code too. These models are not infinite, they will never be infinite, and while code optimized for AI and code optimized for humans will probably never be quite the same, they are at least correlated enough that it's still worth fighting the AI's tendency to spew out code that spends code quality without investing in it.
[1]: Which is less trivial than it sounds and violated by programmers on a routine basis: https://jerf.org/iri/post/2025/fp_lessons_half_constructed_o...
[2]: https://jerf.org/iri/post/2025/fp_lessons_types_as_assertion...
> is precisely that the code it puts out in the language I'm using contains a lot of things I consider errors, given the "middle-of-the-road" style it has picked up from all the code it has ingested.
That is a really good point: the output you're gonna get is going to be mediocre, because it was trained (in aggregate) on mediocrity.
So the people who gush about LLMs were probably subpar programmers to start with, and the ones who complain probably tend to be better than average, because who else would be irritated by mediocrity?
And then you have to think about the long-term social effects: the more code the mediocrity machine puts out, the more mediocre code people are exposed to, and the more mediocre habits they'll pick up and normalize. IMHO, a lot of mediocrity comes from "growing up" in an environment with poor-to-mediocre norms. The next generation of seniors, who will have more experience operating LLMs than writing code themselves, are probably more likely to get stuck in mediocrity.
I know someone's going to make an analogy to compilers to dismiss what I'm saying: but the thing about compilers is that they are typically written by very talented and experienced people who've spent a lot of time carefully reasoning about how they behave in different scenarios. That's nothing like an LLM (just imagine how bad compilers would be if they were written by a bunch of mediocre developers from an outsourcing body shop; that's an LLM).
This is close to my approach. I love Copilot inline completion at GitHub's entry tier because I can accept/reject at the line level.
I barely ever use AI code gen at the file level.
Other uses I’ve gotten are:
1. It’s a great replacement for search in many cases
2. I have used it to fully generate bash functions and regexes. I think it's useful here because those languages are dense and esoteric, so most of my time is spent remembering syntax. I don't have it generate pipelines of scripts, though.
> a lot of things I consider errors, given the "middle-of-the-road" style it has picked up from all the code it has ingested. For instance, I use a policy of "if you don't create invalid data, you won't have to deal with invalid data"
Yea, this is something I've also noticed but it never frustrated me to the point where I wanted to write about it. Playing around with Claude, I noticed it has been trained to code very defensively. Null checks everywhere. Data validation everywhere (regardless of whether the input was created by the user, or under the tight control of the developer). "If" tests for things that will never happen. It's kind of a corporate "safe" style you train junior programmers to do in order to keep them from wrecking things too badly, but when you know what you're doing, it's just cruft.
For example, it loves to test all my C++ class member variables for null, even though there is no code path that creates an incomplete class instance, and I throw if construction fails. Yet it still happily whistles along, checking everything for null in every method, unless I correct it.
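A contrived sketch of that pattern (hypothetical names, but the shape is what I keep seeing): the constructor throws if it can't establish the invariant, so the member is never null afterward, yet the LLM-style method re-checks it anyway.

```cpp
#include <memory>
#include <stdexcept>
#include <string>

class Report {
public:
    // Construction either establishes the invariant or throws.
    explicit Report(std::unique_ptr<std::string> title)
        : title_(std::move(title)) {
        if (!title_) throw std::invalid_argument("title required");
    }

    // LLM-style defensive version: guards against a state that
    // the constructor already made impossible.
    std::string render_defensive() const {
        if (!title_) return "";  // dead branch: title_ is never null here
        return "Report: " + *title_;
    }

    // What the invariant actually permits: no check needed.
    std::string render() const { return "Report: " + *title_; }

private:
    std::unique_ptr<std::string> title_;  // invariant: non-null after construction
};
```

Multiply that dead branch across every method of every class and you get the cruft I'm describing.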