I have a theory that vibe coding existed before AI.
I’ve worked with plenty of developers who are happy to slam null checks everywhere to solve NREs with no thought to why the object is null, whether it should even be null there, etc. There’s just a vibe that the null check works and solves the problem at hand.
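A minimal sketch of the pattern (the names and functions here are illustrative, not from any real codebase): the "vibe" fix makes the NRE disappear without ever asking why the value was null, while the considered fix makes the bad state fail loudly at its source.

```typescript
interface User { name: string }

// Returns null for unknown ids; the real question is why an
// unknown id reached this code path at all.
function getUser(id: number): User | null {
  return id === 1 ? { name: "Ada" } : null;
}

// The vibe fix: slam a null check in, error gone, root cause untouched.
function greetVibe(id: number): string {
  const user = getUser(id);
  if (user === null) return "";
  return `Hello, ${user.name}`;
}

// The considered fix: treat the impossible state as a bug and say so.
function greet(id: number): string {
  const user = getUser(id);
  if (user === null) {
    throw new Error(`no user with id ${id}: caller passed a bad id`);
  }
  return `Hello, ${user.name}`;
}
```

The first version quietly returns an empty greeting forever; the second surfaces the bad id the first time it happens.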
I actually think a few folks like this can be valuable around the edges of software but whole systems built like this are a nightmare to work on. IMO AI vibe coding is an accelerant on this style of not knowing why something works but seeing what you want on the screen.
Blindly copying and pasting from StackOverflow until it kinda sorta works is basically vibe coding
AI just automates that
> I actually think a few folks like this can be valuable around the edges of software but whole systems built like this are a nightmare to work on. IMO AI vibe coding is an accelerant on this style of not knowing why something works but seeing what you want on the screen.
I would correct that: it's not an accelerant of "seeing what you want on the screen," it's an accelerant of "seeing something on the screen."
[Hey guys, that's a non-LLM "it's not X, it's Y"!]
Things like habitual, unthoughtful null-checks are a recipe for subtle data errors that are extremely hard to fix because they only get noticed far away (in time and space) from the actual root cause.
I agree, but I'd draw a different comparison: vibe coding has accelerated the type of developer who relied on Stack Overflow to solve all their problems, the kind of dev who doesn't try to solve problems themselves. It has simply accelerated that way of working, but it's less reliable than before.
If you've been in the industry long enough, you've certainly crossed paths with a boss who said it needs to be fixed in 5 minutes or else, even if the problem wasn't caused by you and the solution clearly needs more than 5 minutes. (The root cause was usually that someone else only had 5 minutes to do something, too.)
I once had a job where my boss ordered (that's the word he used) me to do the wrong thing. The rest of the team and I refused, except for one guy who did it because he was certain that 9 out of 10 people were wrong while he was the only one who was right. The company spent 2M USD on returns, refunds, and compensation in a project that probably didn't cost that much. "It was just a patch! How could he possibly have known?" said the dismissed manager.
(Now he works for Oracle, why not, right?)
I'd call some null-pointer-lint-with-automatic-fixes tools "vibe coding", tbh. I've run across a couple that do a pretty good job of detecting possible nulls and adding annotations about them, and that's great... but then the fix is "if null, return null", and in practice it's frequently applied completely blindly without any regard to correctness.
If you lean on tools like that, you can rapidly degrade your codebase into "everything might be null and might short-circuit silently, and it can't tell you when that happens", leaving you with buggy software that is next to impossible to understand or troubleshoot, because there are no "should not be null" hints, stack traces, logs, or anything else that would help figure out causes.
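A sketch of how those "if null, return null" auto-fixes compound (the config shape here is invented for illustration): once every layer is nullable, a failure deep in a chain surfaces as a silent null far away, with no record of which link actually broke.

```typescript
interface Config { db?: { host?: string } }

// Imagine the db section of the config failed to parse upstream.
function loadConfig(): Config | null {
  return {};
}

// After a few rounds of blind fixes, everything short-circuits quietly.
// host ends up null, but nothing recorded *which* step in the chain
// failed, or whether a missing db host is even a legal state.
const host: string | null = loadConfig()?.db?.host ?? null;
```

By the time `host` is used, the distance between the symptom (a null host) and the cause (a bad config parse) can be arbitrarily large.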
One of my frustrations with AI, and one of the reasons I've settled into tab-complete-based usage of it for a lot of things, is precisely that the "middle-of-the-road" style it has picked up from all the code it has ingested produces a lot of things I consider errors in the language I'm using. For instance, I follow a policy of "if you don't create invalid data, you won't have to deal with invalid data" [1], but I have to fight the AI on that all the time, because it's a routine mistake programmers make and the AI makes the same mistake repeatedly. I have to fight the AI to properly create types [2], because it just wants to slam everything out as base strings and integers, and inline all manipulations on the spot (repeatedly, if necessary) rather than define methods... at all, let alone correctly use methods to maintain invariants. (I've seen it make methods on some occasions. I've never seen it correctly define invariants with methods.)
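A minimal sketch of the "create types and maintain invariants with methods" idea, in the spirit of the linked posts (the `EmailAddress` type and its methods are my illustration, not the author's): the constructor is the only way in, so downstream code never has to re-validate.

```typescript
class EmailAddress {
  // Private constructor: the only way to obtain an EmailAddress
  // is through the validating factory below.
  private constructor(readonly value: string) {}

  static parse(raw: string): EmailAddress {
    // Deliberately simplistic check, just to show the shape of the idea.
    if (!raw.includes("@")) throw new Error(`not an email: ${raw}`);
    return new EmailAddress(raw);
  }

  // Safe without re-checking: the invariant guarantees "@" is present.
  domain(): string {
    return this.value.split("@")[1];
  }
}

const addr = EmailAddress.parse("ada@example.com");
```

Contrast with the "base strings" style, where every consumer of a plain `string` email has to decide for itself whether the value might be invalid.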
Using tab complete gives me the chance to generate a few lines of a solution, then stop it, correct the architectural mistakes it is making, and then move on.
To AI's credit, once corrected, it is reasonably good at using the correct approach. I would like to be able to prompt the tab completion better, and the IDEs could stand to feed the tab completion code more information from the LSP about available methods and their arguments and such, but that's a transient feature issue rather than a fundamental problem. Which is also a reason I fight the AI on this matter rather than just sitting back: In the end, AI benefits from well-organized code too. They are not infinite, they will never be infinite, and while code optimized for AI and code optimized for humans will probably never quite be the same, they are at least correlated enough that it's still worth fighting the AI tendency to spew code out that spends code quality without investing in it.
[1]: Which is less trivial than it sounds and violated by programmers on a routine basis: https://jerf.org/iri/post/2025/fp_lessons_half_constructed_o...
[2]: https://jerf.org/iri/post/2025/fp_lessons_types_as_assertion...
My experience with AI coding is mixed.
In some cases I feel like I get better quality for slightly more time than usual. My testing situation on the front end is terribly ugly because of the "test framework can't know React is done rendering" problem, but working with Junie I figured out a way to isolate object-based components and run them as real unit tests with mocks. I had some unmaintainable TypeScript that would explode with gobbledygook error messages that neither Junie nor I could understand whenever I changed anything, but after two days of talking about it and working on it, it was an amazing feeling to see the type finally make sense to me and to Junie at the same time.
In cases where I would have tried one thing, I can now try two or three things and keep the one I like best. I write better comments (I don't do the Claude.md thing, but I do write "exemplar" classes that have prescriptive AND descriptive comments and say "take a look at...") and more tests than I would on my own for the backend.
Even if you don't want Junie writing a line of code, it shines at understanding code bases. When I couldn't figure out how to use an open source package from reading the docs, I've always opened it in the IDE and inspected the code. Now I do the same but ask Junie questions like "How do I do X?" or "How is feature Y implemented?" and often get answers quicker than by digging into unfamiliar code manually.
On the other hand, it is sometimes "lights on and nobody home". For a particular patch I am working on now, it tried a few things that just didn't work, or produced convoluted if-then-else ladders that I hate (even after I told it I didn't like that). But out of all that fighting, I got a clear idea of where to put the patch to make it really simple and clean.
But yeah, if you aren't paying attention it can slip something bad past you.
Totally agree. I see it all the time: https://bower.sh/death-by-thousand-existential-checks
"on error resume next" has been the first line of many vba scripts for years
Yeah. There are times when silently swallowing nulls is the proper answer. I've found myself doing it many times in C# to trap events that get triggered during creation. But you should never do so unless you've traced where they're coming from!
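A TypeScript analog of that C# situation (the `Downloader` class is invented for illustration): an event can fire while the object is still being wired up, before any listener has been attached. Here the null check is a traced, documented decision rather than a vibe.

```typescript
class Downloader {
  // Callers typically attach this *after* construction.
  onProgress?: (pct: number) => void;

  start(): void {
    // Fires immediately, so a 0% event can legitimately arrive before
    // any listener exists. Swallowing it in emit() below is deliberate:
    // we traced where the early event comes from.
    this.emit(0);
  }

  private emit(pct: number): void {
    this.onProgress?.(pct); // safe no-op if no listener is attached yet
  }
}
```

The difference from the vibe-coded version is the comment trail: anyone reading `emit` knows exactly which early event the guard exists to absorb.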
"ship fast, break things"
I've written before about my experience at a shop like this. The null check would swallow the exception and do nothing about the failure so things just errored silently. Many high fives and haughty remarks about how smart the team was for doing this were had at the expense of lesser teams that didn't. The whole operation ran on a hackneyed MVP architecture from a Learning Tree class a guy took in 2008 and snippets stolen from StackOverflow and passed around on a USB key. Deviation from this bible was heresy and rebuked with sharp, unprofessional behavior. It was not a good place to work for those who value independent thought.
> AI vibe coding is an accelerant on this style of not knowing why something works but seeing what you want on the screen.
I've been saying this exact thing for years now. It also does the whole CRUD app "copy, paste, find, replace from another part of the application" workflow for building new domains very well. If you can bootstrap a codebase with good architectural practices and tests then Claude Code is a productivity godsend for building business apps.