What's become clear is we need to bring Section 230 into the modern era. We allow companies to not be treated as publishers for user-generated content as long as they meet certain obligations.
We've unfortunately allowed tech companies to get away with selling us this idea that The Algoirthm is an impartial black box. Everything an algorithm does is the result of a human intervening to change its behavior. As such, I believe we need to treat any kind of recommendation algorithm as if the company is a publisher (in the S230 sense).
Think of it this way: if you get 1000 people to submit stories they wrote and you choose which of them to publish and distribute, how is that any different from you publishing your own opinions?
We've seen signs of different actors influencing opinion through these sites. Russian bot farms are probably overplayed in their perceived influence but they're definitely a thing. But so are individual actors who see an opportunity to make money by posting about politics in another country, as was exposed when Twitter rolled out showing location, a feature I support.
We've also seen this where Twitter accounts have been exposed as being ChatGPT when people have told them to "ignore all previous instructions" and to give a recipe.
But we've also seen this with the Tiktok ban that wasn't a ban. The real problem there was that Tiktok wasn't suppressing content in line with US foreign policy unlike every other platform.
This isn't new. It's been written about extensively, most notably in Manufacturing Consent [1]. Controlling mass media through access journalism (etc) has just been supplemented by AI bots, incentivized bad actors and algorithms that reflect government policy and interests.