It doesn't matter whether it's AI hallucinations or entirely human scientific fraud: the problem is the same, and the solution works for both cases.
If you can't verify that your bibliography cites real articles, you shouldn't get published.
LLMs have just poured gasoline on the fire.