csnover · today at 2:01 AM

You aren’t the only one who remembers. But back then it was a self-selecting process. The problem with “the algorithm”, as I see it, is not that it increases the baseline toxicity of your average internet fuckwad (though I do think that, by seeking to maximise engagement, the algorithm normalises antisocial behaviour more than a regular internet forum does, rewarding it with extra exposure in a gamified way that leads others to model that behaviour). Instead, it seems to me that it does two uniquely harmful things.

First, it automatically funnels people into information silos which are increasingly deep and narrow. On the old internet, people could silo themselves only to a limited extent; it was still necessary to regularly interact with more mainstream people and ideas. Now, the algorithm “helpfully” filters out anything it decides a person would not be interested in, such as information that might challenge their world view in any meaningful way. In the past, engaging with at least some outside influences was unavoidable, and this helped to moderate people’s most extreme beliefs. Today, the algorithm proxies those interactions through alternative sources that repackage them in a way guaranteed to reinforce, rather than challenge, a person’s unrealistic world view.

Many of these information silos are also built at least in part from disinformation, and many of the people caught in them would never have been exposed to that disinformation had the algorithm not promoted it to them. In the days of Usenet, a person had to get a recommendation from another human participant, or actively seek something out, before being exposed to it. Those natural guardrails are gone. Now an algorithm programmed to maximise engagement decides what people see every day, and it’s different for every person.

Second, the algorithm pushes content without its shared cultural context into the faces of many people who will then misunderstand it. We each exist in separate social contexts with in-jokes, shorthands for communication, and so on, but the algorithm doesn’t care about any of that; it only cares about engagement. So you end up with today’s “internet winner”: someone who made a dumb joke that only their friend group would really understand, and it blows up because, to an outsider, it looks awful. The algorithm amplifies it into the feeds of still more people who lack the appropriate context, using the engagement metric to prioritise it over other, more salient content. Now half the world is expressing outrage over a misunderstanding, one which would probably never have happened if not for the algorithm boosting the message.

Because there is no Planet B, it is impossible to say whether things would be where they are today if everything else were the same but there were no algorithmic feed. (And, of course, nothing happens in a vacuum; if our society were already working well for most people, there would not be so much toxicity for the algorithm to find and exploit.) Perhaps the current state of the world became inevitable once every unhinged person could find 10,000 of their closest friends who also believe that pi is exactly 3, and the algorithm only accelerated the process. But the available body of research leads me to conclude, like the OP, that the algorithm is uniquely bad. I would go so far as to suggest it may be a Great Filter-level threat, given the way it enables widespread reality-splitting across geographically dispersed populations. (And if not the recommendation algorithm on its own, then certainly one combined with an LLM.)