There was a paper recently about using LLMs to find contradictions in Wikipedia, i.e. claims on the same page or between pages which appear to be mutually incompatible.
https://arxiv.org/abs/2509.23233
I wonder if anything more came out of that.
Either way, I think generating article text is the least useful and interesting way to use AI on Wikipedia. Things like what this paper did are much more valuable.
You can easily do this with plain GPT 5.2 in ChatGPT: turn on thinking (extended works better) and web search, point the model at a Wikipedia page, and tell it to check the claims for errors. I've tried it, and surprisingly it finds errors very often, sometimes small, sometimes medium. The less popular the page, the more likely it is to have errors.
This works because GPT 5.x models actually use web search properly.
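If you want to do this at scale rather than by hand in ChatGPT, roughly the same thing should work over the API. Here's a minimal sketch with the OpenAI Python SDK's Responses API; the model id, reasoning-effort setting, and web_search tool type are assumptions on my part and may need adjusting to your SDK version:

    # Sketch only: the model id, reasoning effort, and web_search tool
    # type are assumptions; adjust them to what your SDK actually exposes.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def check_page(url: str) -> str:
        resp = client.responses.create(
            model="gpt-5.2",                 # assumed model id
            reasoning={"effort": "high"},    # the "extended thinking" knob
            tools=[{"type": "web_search"}],  # let it verify claims online
            input=(
                f"Read the Wikipedia article at {url}. Check its factual "
                "claims against independent sources and list any that look "
                "wrong, with a citation for each."
            ),
        )
        return resp.output_text

    print(check_page("https://en.wikipedia.org/wiki/Example"))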
That’s super interesting. I had a similar idea about 18 months ago.
I think the biggest opportunity is building a knowledge graph from Wikipedia and then checking new edits against it: detect any new assertions in an edit, check them for conflicts against the graph, and surface a warning linking to every Wikipedia page the edit contradicts. If the edit is wrong, the editor sees why, with citations; if the edit corrects something Wikipedia currently gets wrong, it shows all the other places that need correcting too (rough sketch below).
https://www.reddit.com/r/LocalLLaMA/comments/1eqohpm/if_some...
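As a rough sketch of what the conflict check could look like, assuming the hard part (extracting assertions from an edit as subject/predicate/object triples, probably via an LLM or Wikidata) is already done. Everything here is a made-up toy, not a real Wikipedia API:

    # Hypothetical toy triple store standing in for the Wikipedia-derived
    # knowledge graph; all names are illustrative.
    from collections import defaultdict

    Triple = tuple[str, str, str]  # (subject, predicate, object)

    class KnowledgeGraph:
        def __init__(self) -> None:
            # (subject, predicate) -> {object -> pages asserting it}
            self.facts = defaultdict(lambda: defaultdict(set))

        def add(self, triple: Triple, source_page: str) -> None:
            s, p, o = triple
            self.facts[(s, p)][o].add(source_page)

        def conflicts(self, triple: Triple) -> list[tuple[str, set[str]]]:
            # Naive: treats every predicate as single-valued, so any
            # differing object counts as a contradiction.
            s, p, o = triple
            return [(other, pages)
                    for other, pages in self.facts[(s, p)].items()
                    if other != o]

    kg = KnowledgeGraph()
    kg.add(("Ada Lovelace", "born", "1815"), "Ada Lovelace")
    kg.add(("Ada Lovelace", "born", "1815"), "Charles Babbage")

    # A new edit asserts a different birth year: warn the editor and
    # list every page the graph says disagrees.
    for other, pages in kg.conflicts(("Ada Lovelace", "born", "1816")):
        print(f"conflicts with '{other}' asserted on: {sorted(pages)}")

And the same page list works in reverse: once a conflict turns out to be Wikipedia's fault rather than the editor's, it tells you everywhere else that needs fixing.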