The issue I see is that the high test coverage created by having LLMs write tests means almost all non-trivial changes break tests, even when they don't change externally visible behavior. In one project I work on, we require 100% test coverage, so people just have LLMs write tons of tests, and now every change I make to the codebase breaks tests.
So now people just ignore broken tests.
> Claude, please implement this feature.
> Claude, please fix the tests.
The only thing we've gained from this is that we can brag about test coverage.
Unit tests vs acceptance tests. You shouldn't be afraid to throw away unit tests if the implementation changes, and acceptance tests should verify behavior at API boundaries, ignoring implementation details.
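Something like this toy sketch is what I mean (all names made up): the acceptance test talks only to the public API, so it survives any refactor that keeps the contract intact.

```python
_db = {}  # storage is an implementation detail, not part of the contract

def add_shipping_address(user_id, address):  # public API
    _db.setdefault(user_id, []).append(address)

def shipping_addresses(user_id):  # public API
    return list(_db.get(user_id, []))

# Acceptance test: exercises only the API boundary, so swapping the
# dict out for a real database later doesn't break it.
def test_added_address_is_listed():
    add_shipping_address("u1", "12 Main St")

    assert "12 Main St" in shipping_addresses("u1")
```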
I feel it ends up being a massive drag on development velocity and makes refactoring to simpler designs incredibly painful.
But hey, we're just supposed to let the AIs run wild and rewrite everything on every change, so maybe that's a heretical view.
My best unit tests are 3 lines, one of them whitespace, and they assert one single thing that's in the requirements.
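For example (withdraw is a hypothetical function under test):

```python
# withdraw() is hypothetical; it floors the balance at zero per the requirement
def test_balance_cannot_go_negative():

    assert withdraw(balance=10, amount=25) == 0
```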
These are the only tests I've witnessed people delete outright when the requirements change. Anything more complex than that, and they'll worry that there's some secondary assertion implied by the test, so they can't just delete it.
Which, really, is just experience telling them that the code smells they see in the tests are actually part of the test.
meanwhile, a test like this (made-up names):
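```python
def test_user_has_exactly_one_shipping_address():

    assert len(user.shipping_addresses) == 1
```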
is demonstrably a dead test when the story is "allow users to have multiple shipping addresses", as is a test that makes sure balances can't go negative once we decide to allow a 5-day grace period on account balances. But if it's just one of six asserts in the same massive test, people get nervous and start losing time.