Just like there are some easy "tells" with LLM-generated English, vibecode has a certain smell to it. Parallel variables that do the same thing are probably the most common tell I've seen in the hundreds of thousands of lines of vibecode I've generated and then reviewed (and fixed) by now (first sketch below).

Whether the machine itself "understands" anything is the philosophical Chinese room thought experiment, though. It's a computer. Some sand that we melted into a special shape. Can it "understand"? Leave that for the philosophers to decide. There's code that was generated by an LLM rather than by yacc; fine. Code is code, though. If you sit down and read all of it until you know what each variable, function, and class does, it doesn't matter where the code came from; that is what we call understanding what the code does. Sure, most people are too lazy to actually do that, and again, vibecode has a certain smell to it, but the claim that code is incomprehensible to humans simply because an artificial intelligence generated it seems unsupported. It's fair to point out that no human may have bothered to understand it yet, but that's a different claim.

To simplify the question: if ChatGPT generates the code to produce the Fibonacci sequence, can we, as humans, understand that code? Can we understand it if a human writes the same seven lines (second sketch below)? As we scale up to more complex code, though, at what point does it become incomprehensible to human-grade intelligence? If it's all vibecode that isn't being reviewed and is just being thrown into a repo, then sure, no human currently understands it. But it's just code. Bash your head against it long enough and it yields, even if there are three singleton factory classes doing almost exactly the same thing in parallel that only share state on Wednesdays over an RPC mechanism that shouldn't even work in the first place, but somehow does. There's no arcane hidden whitespace that whispers to the compiler to behave differently because an AI generated it. It may be weird and different, but have you tried Erlang? Huff enough of the right kind of glue and you can get anything to make sense.

Going back to the Chinese room, though: if I, as a human, can work tickets and make intentional changes to the vibecoded program/system that produce the desired behavior, at what point does that become actual understanding rather than merely thinking I understand the code?
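First sketch: what the "parallel variables" smell looks like in practice. This is a minimal invented example, not real vibecode; every name in it is made up for illustration:

```python
def handle(item):
    """Stand-in for whatever the real per-item work is (hypothetical)."""
    pass

def process_items(items):
    count = 0
    processed_total = 0  # parallel twin of `count`: incremented in lockstep, never diverges
    for item in items:
        handle(item)
        count += 1
        processed_total += 1
    # Reading this top to bottom is all it takes to see that one counter is
    # dead weight. Smelly, sure, but not incomprehensible.
    return count
```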
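Second sketch: roughly the seven lines in question. Any correct version, whether a human or an LLM typed it, makes the same point:

```python
def fib(n):
    """Return the first n Fibonacci numbers."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fib(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```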
Say you start at BigCo, get access to their million-line repo(s) with no docs, and are handed a ticket to work on. Ugh. You just barely started. But after you've been there for five years, it's obvious to you what the Pequad service does, and you might even know who gave it that name. If the claim is that LLMs generate code that's simply incomprehensible to humans, the two counterexamples I have for you are TheDailyWtf.com and Haskell.
> but the claim that code is incomprehensible to humans simply because an artificial intelligence generated it seems unsupported
That's not my claim. My claim is that AI-generated code is misleading to people familiar with human-written code. If you've grown up on AI-generated code, I wouldn't expect you to have this problem, much like how chess newbies don't find impossible board states much harder to process than possible ones.