Definitely. But AI can also generate unit tests.
You have to be careful to tell the LLM exactly what to test for, and to manually review the whole suite. But overall it makes me feel much more confident about the growing amount of generated code. This of course eats into the productivity gains, but it's necessary in my opinion.
And linters help.
I've been using Claude Sonnet 4.5 lately and I've noticed a tendency for it to create tests that prove themselves: rather than calling the function we're hoping to test, it re-implements the logic inside the test and then tests that. It's helpful, and it usually works well if you have well-defined inputs and outputs; I much prefer it to writing tests manually, but you have to be very careful.
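To illustrate the "self-proving test" pattern described above, here's a minimal sketch in Python (the function name and values are made up for the example):

```python
def apply_discount(price: float, rate: float) -> float:
    """The real function we want to test (illustrative example)."""
    return price * (1 - rate)


def test_apply_discount_self_proving():
    # Anti-pattern: the test duplicates the implementation's logic...
    expected = 100.0 * (1 - 0.5)
    # ...and then checks the duplicate against itself, never calling
    # apply_discount at all. A bug in the real function passes unnoticed.
    assert expected == 50.0


def test_apply_discount_correct():
    # What we actually want: call the real function with a known
    # input and assert on its output.
    assert apply_discount(100.0, 0.5) == 50.0
```

The first test will keep passing no matter how broken `apply_discount` becomes, which is why the suite still needs a manual read-through.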
It doesn't generate good tests by default though.
I worked on a team where we had someone come in and help us improve our tests a lot.
The default LLM-generated tests are a bit like the ones I wrote before that experience.