It seems you're treating LoC as a measure of productivity. That would answer your question as to why the author doesn't find it makes them more productive: if total output increases but quality decreases (which, for code, means more bugs), has productivity increased or has it stayed the same?
To answer my own question: if you can pump out features faster but then turn around and spend more time on bugs than you did previously, your productivity is likely net neutral.
There is a reason LoC as a measure of productivity has been shunned by the industry for many, many years.
LoC is a terrible metric for comparing productivity of different developers, even before you get to Goodhart's Law.
OTOH, for a given developer to implement a given feature in a given system, at the end of the day, some amount of code has to be written.
If a particular developer finds that AI lets him write code comparable to what he would have written himself, but faster than he can do it alone, then looking at lines written might actually be meaningful, just in that context.
I didn't mean to imply LoC as a measurement of productivity. What I really meant was more like "the amount of code produced that the human using the LLM judges to be useful".
To give an example, say you want a module that transforms some data and you ask the LLM to write it. It generates a module with tons of single-layer if-else branches and a huge LoC count. Maybe one human dev looks at it and says, "great, this solves my problem, and the LoC and verbosity aren't an issue even though it's ugly." Maybe a second looks at it and says, "there's definitely some abstraction I can find to make this easier to understand and build on top of."
Depending on the scenario and context, either of them could be correct.
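To make the contrast concrete, here's a rough sketch of the two styles for a trivial data-transform (the names like normalize_status_verbose and STATUS_MAP are made up for illustration, not from anyone's actual code):

```python
# Style 1: the branch-heavy version an LLM might generate.
def normalize_status_verbose(raw: str) -> str:
    if raw == "open":
        return "OPEN"
    elif raw == "in progress":
        return "IN_PROGRESS"
    elif raw == "in-progress":
        return "IN_PROGRESS"
    elif raw == "closed":
        return "CLOSED"
    elif raw == "done":
        return "CLOSED"
    else:
        return "UNKNOWN"

# Style 2: the same behavior behind a small abstraction.
STATUS_MAP = {
    "open": "OPEN",
    "in progress": "IN_PROGRESS",
    "in-progress": "IN_PROGRESS",
    "closed": "CLOSED",
    "done": "CLOSED",
}

def normalize_status(raw: str) -> str:
    # Table lookup replaces the branch chain; adding a new mapping is one line.
    return STATUS_MAP.get(raw, "UNKNOWN")
```

Same output either way; whether the extra lines of the first version count as "productivity" depends entirely on who has to maintain it next.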