
Aurornis · yesterday at 11:14 PM · 6 replies

So if HN added anything personalized, like allowing you to show fewer stories on topics you dislike, it would lose protection? I can't get on board with that.

I also think it would be extremely unpopular. People like their recommendation engines. They want Netflix to show them more similar shows. They want Reddit to help them find more similar subreddits. I know there are HN users who don't want any of these recommendation engines, but on the whole people actually want them.


Replies

slg · yesterday at 11:42 PM

>People like their recommendation engines.

People liked cigarettes too.

>They want Netflix to show them more similar shows.

Perhaps that example was a little too revealing on your end. Netflix doesn't have/need Section 230 protections and they're doing fine.

I'm not suggesting these algorithms should be illegal, just that Section 230 protections were defined too broadly because they predated the feasibility of these types of algorithms. Platforms would be free to continue algorithmic promotion, but I believe these algorithms would be less harmful if the platforms had to worry about potential legal liability.

Think YouTube and copyright for comparison. The DMCA is far from perfect, but YouTube is an example of a platform that survived and even thrived in the transition from a world that didn't care about copyrighted internet video to one in which it needed to moderate with copyright in mind.

sethammons · today at 2:56 PM

For me, the distinction is control. If I'm filtering out things I don't like, I'm in control. If the system is filtering out or promoting items, I think it's fair for it to take on more responsibility.

A system doesn't keep your feed full because it wants your eyes; it does it for money. When it chooses what goes into the feed, it should take on increased liability for what comes out. That's the risk it accepts for more money. If the money isn't worth it, don't recommend.

I enjoyed the internet in the beforetimes. Recommendations were limited to "this is objectively related, this is new, this is upvoted, this is by someone you follow or someone they follow, or this is randomly chosen." I still feel there is some liability there, but it is less than when it changes to "this is something we have determined we should show you based on your personal past behavior." That feels different than liking a category when the meta-categories are picked for you. Especially when those meta-categories allow for things you would not want to opt in to, like doomscroll material.

I like some of the stuff I get algorithmically. I never would have searched for a soul cover of Slim Shady, but I'm glad I found it. And I'm glad I found knot tying videos. I think there is space for fancy feeds. But I think the platform should come closer to being a publisher. This _will_ depress content throughput if everything has to be monitored, which will change the economics, and maybe that means some businesses can't exist as they do today. I'd likely pay a subscription to a LearnTok that had curated, quality material.

levkk · yesterday at 11:39 PM

I'm paying for Netflix to do that as a feature. Instagram uses that to drive engagement to sell ads. Disabling personalized content on Netflix is a revenue-neutral choice. On Instagram, that would mean their ad revenue takes a huge dive. Apples aren't oranges.

watwut · today at 10:05 AM

1.) I do not know anyone who particularly likes Netflix's recommendation algorithm.

2.) Netflix's algorithm is not relevant to Section 230 protections, because it doesn't surface any third-party data. All of it is Netflix's own content.

skydhash · yesterday at 11:40 PM

That is not comparable because of how little control you have over the algorithm in the other cases. On Bandcamp, you can select the genre and a sorting criterion and have very good control over the list. But on Spotify, it's very obscure, with things you've never asked for appearing in front of even your own library.

intended · today at 5:00 AM

I can get on board with it for sure.

There's a paper that studied the spread of misinformation online, back before COVID. It found that messages cascaded through science- and research-oriented networks differently than they cascaded through conspiracy communities.

Popularity is not a sign of signal. It's a sign of being able to scratch the limbic system and the zeitgeist at the same time.

For a site like HN, popularity isn’t a good predictive signal.