If the web of trust extends only to the people I actually know to be real, then it works -- but it's a very small web.
And by small, I mean: this whole trusted group could fit into one quiet Discord channel. That doesn't seem big enough to be useful.
However, if it extends beyond that, then things get dicier. Suppose Bill trusts me, as well as everyone I myself trust; he does this to make his web of trust big enough to be useful.
Now, suppose I start trusting bots -- maybe incidentally, maybe maliciously. However it happens, Bill now has bots in his web of trust as well.
And remember: the whole premise here is that bots can be indistinguishable from people, so Bill has no idea this has happened or that I have infected his web with bots.
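To make the failure concrete, here's a minimal sketch in Python (the names and the graph are hypothetical, and it assumes trust extends transitively through chains of vouches): Bill's effective web is everyone reachable from him through direct-trust edges, so any bots I trust land in his web automatically.

```python
# Hypothetical trust graph: who each participant directly trusts.
direct_trust = {
    "bill": {"me"},
    "me": {"alice", "bot_1", "bot_2"},  # I've (unknowingly) trusted two bots
    "alice": set(),
}

def effective_web(person: str) -> set[str]:
    """Everyone reachable through chains of direct trust."""
    seen: set[str] = set()
    frontier = [person]
    while frontier:
        current = frontier.pop()
        for peer in direct_trust.get(current, set()):
            if peer not in seen:
                seen.add(peer)
                frontier.append(peer)
    return seen

print(sorted(effective_web("bill")))
# ['alice', 'bot_1', 'bot_2', 'me'] -- the bots carry no marker
# distinguishing them from people, which is exactly the problem.
```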
---
It all seems kind of self-defeating to me: the web is either too small to be useful, or it includes bots.
Critically, trust doesn't have to be a binary trusted/untrusted flag, and it doesn't have to be statically determined. If Bill vouched for you yesterday and today you are found to be trusting a bunch of discovered bots, that would down-weight the network's trust in Bill a lot more than if he had vouched for you months ago.
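As a rough sketch of what that could look like (the half-life, the penalty formula, and the function names are my own assumptions, not part of any spec), here's one way to make a vouch's weight decay with age, so a recent vouch for someone later caught trusting bots costs the voucher more than a stale one:

```python
def vouch_weight(days_since_vouch: float, half_life_days: float = 90.0) -> float:
    """Exponential decay: a fresh vouch carries full weight; old ones fade."""
    return 0.5 ** (days_since_vouch / half_life_days)

def penalty_to_voucher(days_since_vouch: float, bot_fraction: float) -> float:
    """How hard Bill's standing is hit when someone he vouched for trusts bots."""
    return vouch_weight(days_since_vouch) * bot_fraction

# Bill vouched for me; I'm later found to be trusting bots (30% of my contacts).
print(round(penalty_to_voucher(1, 0.3), 3))    # 0.298 -- vouched yesterday: big hit
print(round(penalty_to_voucher(180, 0.3), 3))  # 0.075 -- vouched months ago: small hit
```

Exponential decay is just one possible curve here; the point is only that recency scales accountability.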
The question is whether we can arrive at a set of rules and heuristics and applications of the system that sufficiently incentivizes being a trustworthy member of the network.