dang · 10/01/2024

intended's reply is correct.

Most of the bot activity we know about on HN has to do with voting rings and things like that, people trying to promote their commercial content. To the extent that they post things, it's mostly low-quality stuff that either gets killed by software, flagged by users, or eventually reported to us.
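For a rough sense of what that kind of detection can look like in general (a toy sketch, not a description of HN's actual anti-abuse software; the accounts and item ids below are made up), one classic voting-ring signal is abnormal overlap in what accounts upvote:

    # Toy sketch of voting-ring detection: compute pairwise Jaccard
    # similarity of accounts' upvote sets and flag pairs that co-vote
    # far more than chance would suggest.
    from itertools import combinations

    # Hypothetical data: account -> set of item ids the account upvoted.
    votes = {
        "acct_a": {101, 102, 103, 104, 105},
        "acct_b": {101, 102, 103, 104, 106},
        "acct_c": {201, 305, 412},
    }

    def jaccard(a: set, b: set) -> float:
        """Overlap of two vote sets: 0.0 (disjoint) to 1.0 (identical)."""
        return len(a & b) / len(a | b) if a | b else 0.0

    SUSPICIOUS = 0.5  # arbitrary threshold for this sketch

    for (u1, s1), (u2, s2) in combinations(votes.items(), 2):
        sim = jaccard(s1, s2)
        if sim >= SUSPICIOUS:
            print(f"possible ring: {u1} and {u2} (overlap {sim:.2f})")

A real system would also weight overlap by how obscure the items are and how tightly the votes cluster in time, but the core idea is the same.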

When it comes to political, ideological, nationalistic arguments and the like, that's where we see little (if any) evidence. Those are the areas where users are most likely to accuse each other of not being human, or posting in bad faith, etc., so that's what I've written about in the posts that I linked to.

There's still always the possibility that some bad actors are running campaigns too sophisticated for us to detect and crack down on. I call this the Sufficiently Smart Manipulator problem, and you can find past takes on it here: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

I can't say whether or not this exists (that follows by definition: "sufficiently" means smart enough to evade detection). All I can tell you is that in the specific cases people ask us to look into, there are usually obvious reasons not to believe this interpretation. For example, would a sufficiently smart manipulator be smart enough to have been posting about Julia macros back in 2017, or the equivalent? You can always make a case for "yes", but those cases end up having to stretch pretty thin.


Replies

JohnMakin · 10/02/2024

Dang, I appreciate your moderation approach here, and I agree with most of what you said. IME, in the 18 months or so I've been here, this site has been a welcome bastion against typical bot/campaign activity; nowhere on the web has seemed safe for the last ~dozen years. Most of what I've written here applies to my research into foreign bot activity on social networks, particularly in election years, where you can far more easily piece together associations between accounts, narratives, and writing styles, connect a lot more dots than you can on a site like this, and conclude very definitively that yes, this is a bot.
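To give a rough idea of what I mean by piecing writing styles together (a toy sketch, not my actual pipeline; the accounts and comments below are made up), a standard stylometry baseline is to compare character n-gram profiles between accounts:

    # Toy stylometry sketch: character n-gram frequencies are a cheap
    # writing-style fingerprint. Comments written by one operator across
    # several accounts often score suspiciously similar under it.
    from collections import Counter
    from math import sqrt

    def ngram_profile(text: str, n: int = 3) -> Counter:
        """Frequency counts of character n-grams in lowercased text."""
        text = text.lower()
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    def cosine(p: Counter, q: Counter) -> float:
        """Cosine similarity between two n-gram frequency vectors."""
        dot = sum(p[g] * q[g] for g in p.keys() & q.keys())
        norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
        return dot / norm if norm else 0.0

    # Hypothetical comments from two accounts under investigation.
    a = ngram_profile("This narrative is exactly what the mainstream media won't tell you.")
    b = ngram_profile("Exactly the narrative the mainstream media won't tell you!")
    print(f"style similarity: {cosine(a, b):.2f}")  # near 1.0 suggests one author

On a real network you'd combine a signal like this with posting-time correlations and shared narratives across accounts, which is how those dots get connected.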

My original comment was just meant to chime in that, in the wild over the last ten years, I've encountered an extraordinary amount of this kind of activity (which I confirmed: I really do research this stuff on the side and have written quite a lot about it), which would lend credibility to anyone who felt they had experienced bot activity on this site. I haven't done a full test on this site yet, because I don't think it's allowed, but at a glance I suspect particular topics and keywords attract swarms of voting/downvoting activity, which you alluded to in your post. I think the threshold of 500 upvotes to downvote is a bit low, but it's clear to me that what you are doing is working. I'm only writing all of this out to make it very clear that I am not making any criticisms or commentary about this site and how it handles bots/smurfs/etc.

Most of my research centers on the 2016 and 2020 political cycles. Since the invention, release, and mass distribution of LLMs, I personally think this stuff has proliferated far beyond what anyone can imagine right now, and that it renders most of my old methods worthless, but for now that's just a hypothesis.

Again, I appreciate the moderation of this site; it's one of the few places left where I can converse with reasonably intelligent and curious people, compared to the rest of the web. Whatever you are doing, please keep doing it.

imiric · 10/01/2024

Thank you. I appreciate your positive outlook on these things. It helps counteract my negative one. :)

For example, when you say "The answer to the Sufficiently Smart Manipulator is the Sufficiently Healthy Community", that sounds reasonable, but I see a few issues with it.

1. These individuals are undetectable by definition. They can infiltrate communities and direct conversations and opinions without raising any alarms. Sometimes these are long-term operations that take years and involve building trust and relationships. For all intents and purposes, they may seem like just another member of the community, which they partly are. But they have an agenda that masquerades as strong opinions, and they are protected by tolerance and inclusivity (the paradox of tolerance).

2. Because they're difficult to detect, they can easily overrun the community. What happens when they make up a substantial percentage of it? The line between fact and fiction becomes blurry, and it's no longer possible to counteract bad arguments with better ones, simply because they become a matter of opinion. Ultimately, those who shout louder, in larger numbers, and from a better position get heard the most.

These are not conspiracy theories. Psyops and propaganda are very real and happen all around us in ways we often can't detect. We only see the effects, like increased polarization and confusion, but we are unable to trace them back to the source.

Moreover, with the recent advent of AI, how long until these operations are fully autonomous? What if they already are? Bots can be deployed by the thousands, and their capabilities improve every day.

So I'm not sure that a Sufficiently Healthy Community alone has a chance of counteracting this. I don't have the answer either, but I can't help seeing this trend in most online communities. Can we do a better job at detection? What does that even look like?
