I suppose that my generalization was too broad and that LLMs can be either good or bad at writing tests depending on your workflow and expectations.
I'm closely supervising the LLM and giving it fine-grained instructions — I generally understand the full interface design and usually the whole implementation (though sometimes I skim). When I have the LLM write unit tests for me, it writes essentially what I would have written a couple of years ago, except that it tends to be more thorough and adds a few tests I wouldn't have had the patience to write. That saves me quite a bit of time, and the LLM-generated unit tests are probably somewhat better than what I would have written myself.
I won't say that I never see brain-dead mistakes of the "5-vertex square" variety (haha) — by their nature, LLMs tend toward consistency rather than understanding, after all. But I've been using Claude Opus exclusively for a while now, and it makes those mistakes far less often than the lower-powered LLMs I used before.