I think Martin isn't wrong here, but I've seen firsthand AI produce "lazy" code where the answer was actually more code.
A concrete example: I had a set of Python models that defined a database schema for a given set of logical concepts.
I added a new logical concept to the system, very analogous to the existing ones. Claude decided it should just re-use the existing model set, which worked in theory, but it forced consumers to do all sorts of gymnastics to infer types at runtime. It "worked", but it was definitely the wrong layer of abstraction.
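A minimal sketch of what that gymnastics looks like (the model and field names here are hypothetical, not from my actual project): when one model is overloaded to cover two concepts via a discriminator field, every consumer has to re-derive the concept at runtime instead of letting the type system do it.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """One reused model covering two logical concepts (hypothetical example)."""
    kind: str      # "widget" or "gadget" -- overloaded discriminator field
    payload: dict  # shape depends on kind, which the type system can't see

def describe(rec: Record) -> str:
    # Every consumer repeats this runtime dispatch on the discriminator.
    if rec.kind == "widget":
        return f"widget: {rec.payload['name']}"
    elif rec.kind == "gadget":
        return f"gadget: {rec.payload['serial']}"
    raise ValueError(f"unknown kind {rec.kind!r}")
```

A separate `Gadget` model would have been "more code", but the type itself would carry the distinction and this dispatch would disappear.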
Is more code really bad? For humans, yes, we want things abstracted, but sometimes it may make more sense to actually repeat yourself. If a machine is writing and maintaining the code, do we still need that extra layer?
In the olden days we used Duff's device and manually unrolled loops, writing the duplicated code ourselves.
Now the compiler is "smart" enough to understand your intent and generates the duplicated assembly itself. You don't care that it's duplicated because the compiler is doing it for you.
I've had some recent projects using an LLM where I needed a few snippets of non-trivial computational geometry. In the old days, I'd have had to search for a library, get permission from compliance to import it, and then convert my domain representations into the formats that library needed. All of that would still have been cheaper than writing the code myself, but it was non-trivial.
Now the LLM can write only the code I need (no big extra library to import), and it works on the data in the format I already store it in (no translating data structures). The canon says the "right" way would be a geometry library to prevent repeated code, but here I have a self-contained function that "just works".
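For illustration (this isn't from my actual project), the kind of thing I mean is a small, self-contained function like a ray-casting point-in-polygon test, written directly against plain `(x, y)` tuples rather than some library's own point and polygon types:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: is pt inside polygon (a list of (x, y) vertices)?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from pt cross the edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside  # each crossing toggles inside/outside
    return inside
```

Twenty lines, no dependencies, no compliance ticket, no adapter layer. That's the trade the "no repeated code" canon doesn't price in.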