I'm curious how this linting step scales with larger wikis. Finding an inconsistency across N files requires on the order of N² pairwise comparisons, and that's assuming each file contains a single idea.
Presumably, randomness and checking only a limited subset each run will, over time, surface most contradictions. Alternatively, how large do you really expect this kind of thing to get? There's a limit to the number of Warhammer 40k facts worth saving in a wiki.
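For what it's worth, the random-subset idea is cheap to sketch. This is a minimal illustration, not the actual linter: `check` stands in for a hypothetical contradiction detector, and `budget` caps how many of the N·(N−1)/2 pairs a single run inspects, so repeated runs cover most pairs in expectation.

```python
import random
from itertools import combinations

def lint_random_pairs(files, budget, check, seed=None):
    """Check a random sample of file pairs per run.

    Full pairwise linting needs N*(N-1)/2 comparisons; sampling
    `budget` pairs keeps each run cheap, while repeated runs
    cover most pairs over time. `check(a, b)` is a hypothetical
    detector returning True when the two files contradict.
    """
    rng = random.Random(seed)
    pairs = list(combinations(files, 2))
    sample = rng.sample(pairs, min(budget, len(pairs)))
    return [(a, b) for a, b in sample if check(a, b)]

# With 100 files there are 4950 pairs; a budget of 50 inspects
# ~1% of them per run, so coverage accumulates across many runs.
```

Whether that convergence is fast enough depends on how often new contradictions are introduced relative to how often the linter runs.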