Imagine a future news environment where oodles of different models are applied to fact-check most stories from most major sources, with the markup from each one aggregated and viewable.
A lot of the results would be predictable partisan takes and add no value. But in a case like this, where the whole conversation is public, the inclusion of fabricated quotes would become evident. Certain classes of errors would become easy to spot.
Ars Technica blames an over-reliance on AI tools, and that is obviously true. But there is potential for this epistemic regression to be an early stage of spiral development, before we learn to routinely leverage AI tools to inspect every published assertion and then use those results to surface false or contested claims for human attention.
The author of the blog post hypothesised that the fabrication happened as a result of measures blocking LLMs from scraping their blog. If that is the case, adding more LLMs would not accomplish anything at all.
So, the solution to too much AI is... even more AI! You sound like you would fit right in at an LLM shop's marketing department.