Subjecting every real contributor to the "AI guardian" would be unfair, and shadow banning is ineffective when you're dealing with a large number of drive-by nuisances rather than a small number of dedicated trolls. Public humiliation is actually a great solution here.
How effective is it against people who simply don't care?
You could easily guard against bullshit issues so you can focus on what matters. If the issue is legit, it goes ahead to a human reviewer. If it's a run-of-the-mill low-quality or irrelevant AI issue, just close it. Or even nicer: for false positives, let the person who opened the issue "argue" with the AI to explain why it's a legit issue.
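That flow fits in a few lines. A minimal sketch, assuming a classifier in front of the tracker; everything here (the Verdict labels, the classify stub standing in for an actual LLM call) is hypothetical, not any real bot's API:

```go
package main

import (
	"fmt"
	"strings"
)

// Verdict is the hypothetical AI guard's classification of an issue.
type Verdict int

const (
	Legit Verdict = iota
	LowQuality
	Unsure
)

// classify is a stand-in for a real LLM call; a trivial keyword check
// so the sketch runs on its own.
func classify(body string) Verdict {
	switch {
	case strings.Contains(body, "stack trace"), strings.Contains(body, "reproduce"):
		return Legit
	case len(body) < 20:
		return LowQuality
	default:
		return Unsure
	}
}

// triage implements the flow described above: legit issues go to a
// human, obvious junk is closed with a note that the author can reply
// to contest it, and anything unclear escalates to a human so false
// positives get a second look.
func triage(body string) string {
	switch classify(body) {
	case Legit:
		return "forward to human reviewer"
	case LowQuality:
		return "close with comment: reply to argue this is a real issue"
	default:
		return "escalate to human (guard is unsure)"
	}
}

func main() {
	fmt.Println(triage("crash on startup, stack trace attached, steps to reproduce below"))
	fmt.Println(triage("plz fix"))
}
```

The key design choice is that the guard never silently rejects: every non-legit verdict leaves the author a path to a human.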
> Subjecting every real contributor to the "AI guardian" would be unfair
Had my first experience with an "AI guardian" when I submitted a PR to fix a niche issue with a library. It ended up suggesting that I do things a different way, one that would have involved setting a field on a struct before the struct existed (which is why I didn't do that in the first place!)
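The library isn't named, so this is a hypothetical reconstruction of the ordering problem (Conn and Dial are made up), but it shows why the suggestion can't work:

```go
package main

import "fmt"

// Hypothetical stand-in for the unnamed library: the struct is only
// ever created inside Dial, so callers never see it beforehand.
type Conn struct {
	Timeout int
}

func Dial(addr string) *Conn {
	// the library picks its own defaults while constructing the struct
	return &Conn{Timeout: 30}
}

func main() {
	// The reviewer's suggestion, in spirit:
	//
	//   conn.Timeout = 5      // won't compile: conn doesn't exist yet
	//   conn := Dial("example.com:443")
	//
	// The field can only be set once the struct actually exists:
	conn := Dial("example.com:443")
	conn.Timeout = 5
	fmt.Println(conn.Timeout)
}
```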
Definitely soured me on the library itself, and also on submitting PRs on GitHub.