
logankeenan today at 1:43 PM

Regardless of how you feel about content moderation, 48 hours is a ridiculously long time given what AI can do today. A “bad” image could propagate around the world to millions of people in that window. It can and should be removed in minutes: AI can evaluate the “bad” image quickly, and a human moderator isn’t required anymore. However, the compute costs would eat into profits…

Again, I’m not passing judgment on content moderation itself, but this is an extremely weak initiative.


Replies

Ukv today at 2:34 PM

> It can and should be removed in minutes because AI can evaluate the “bad” image quickly and a human moderator isn’t required anymore.

CSAM can be detected through hashes or a machine-learning image classifier (with some false positives), whereas determining whether an image was shared nonconsensually often requires context that is not in the image itself, possibly even contacting the parties involved.
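For the hash-matching half, here's a minimal sketch in Python, assuming the open-source imagehash library; KNOWN_HASHES is a hypothetical stand-in for a curated database such as PhotoDNA's, and the hex value is a placeholder:

    # Hash-based matching sketch (pip install imagehash pillow).
    # KNOWN_HASHES is hypothetical; real systems match against
    # curated databases of hashes of known illegal images.
    import imagehash
    from PIL import Image

    KNOWN_HASHES = [imagehash.hex_to_hash("d1d1d1d1d1d1d1d1")]  # placeholder
    MAX_DISTANCE = 5  # Hamming-distance tolerance for near-duplicates

    def matches_known_image(path: str) -> bool:
        h = imagehash.phash(Image.open(path))
        # Subtracting two ImageHash objects yields their Hamming distance
        return any(h - known <= MAX_DISTANCE for known in KNOWN_HASHES)

Note that this only catches re-uploads of already-known images; it says nothing about whether a newly shared image was consensual.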

Manuel_D today at 1:55 PM

The issue is that if you need to achieve 0% false negatives, you're going to get a lot of false positives.
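A toy illustration with made-up scores, assuming a classifier that outputs a per-image probability: to hit 0% false negatives, the threshold has to drop to the weakest-scoring true positive, and every benign image above that line becomes a false positive.

    # Made-up classifier scores; pushing false negatives to zero forces
    # the decision threshold down to the weakest true positive.
    positives = [0.95, 0.80, 0.15]        # true violations (one scores low)
    negatives = [0.90, 0.60, 0.40, 0.10]  # benign images

    threshold = min(positives)            # 0.15, so no violation is missed
    false_positives = [s for s in negatives if s >= threshold]

    print(f"threshold={threshold}, false-positive rate: "
          f"{len(false_positives) / len(negatives):.0%}")  # 75%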

Ray20 today at 2:11 PM

Regardless of how you feel about content moderation, we are talking about a situation where the government is DEMANDING that corporations implement automated, totalitarian surveillance tools. This is the key factor here.

The next step would be for the government to demand direct access to these tools. Then the government would be able to carry out a holocaust against any ethnic group, ten times more effectively and inexorably than Hitler did.
