Hacker News

Ukv · today at 2:34 PM · 3 replies

> It can and should be removed in minutes because AI can evaluate the “bad” image quickly and a human moderator isn’t required anymore.

CSAM can be detected through hashes or a machine-learning image classifier (with some false positives), whereas determining whether an image was shared nonconsensually seems like it'd often require context that isn't in the image itself, possibly including contacting the parties involved.
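
To make the "detected through hashes" part concrete, here's a rough sketch of hash matching using a generic perceptual hash (the imagehash library's pHash as a stand-in for a proprietary system like PhotoDNA; the hash list and the distance threshold are hypothetical placeholders):

    # Rough stand-in for hash-based matching: imagehash's pHash instead of a
    # proprietary system like PhotoDNA, and a hypothetical list of known hashes.
    from PIL import Image
    import imagehash

    KNOWN_HASHES = [
        # (imagehash.hex_to_hash("..."), "case-reference"),  # filled from a hash list
    ]
    MAX_DISTANCE = 6  # tolerance for re-encoding/resizing; raising it adds false positives

    def find_match(path: str):
        """Return the case reference of the closest known hash, or None."""
        candidate = imagehash.phash(Image.open(path))
        for known, case_ref in KNOWN_HASHES:
            if candidate - known <= MAX_DISTANCE:  # '-' gives the Hamming distance
                return case_ref
        return None

The distance tolerance is what lets slightly re-encoded or resized copies still match, and it's also where the false positives come from.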


Replies

voidUpdate · today at 2:42 PM

I would not want to be the supervisor who has to review any CSAM positives to check for false ones

pjc50 · today at 2:37 PM

Indeed. It seems that the process being described is some kind of one-stop portal, operated by or for OFCOM or the police, where someone can attest "this is a nonconsensual intimate image of me" (hopefully in some legally binding way!), triggering a cross-system takedown. Not all that dissimilar to DMCA.
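
Roughly what that could look like if the portal just fanned a notice out to participating platforms (every name and endpoint below is made up for illustration; a real system would also need identity verification and the legally binding attestation itself):

    # Hypothetical sketch of the fan-out: TakedownNotice, PLATFORM_ENDPOINTS and
    # the /takedown routes are all invented for illustration.
    from dataclasses import dataclass, asdict
    import json, urllib.request

    PLATFORM_ENDPOINTS = [
        "https://platform-a.example/takedown",
        "https://platform-b.example/takedown",
    ]

    @dataclass
    class TakedownNotice:
        image_hash: str       # perceptual hash of the reported image
        attestation_id: str   # reference to the (ideally legally binding) attestation
        portal_case_id: str   # case number at the portal, not the complainant's identity

    def submit_takedown(notice: TakedownNotice) -> None:
        """Send the notice to every participating platform."""
        body = json.dumps(asdict(notice)).encode()
        for endpoint in PLATFORM_ENDPOINTS:
            req = urllib.request.Request(endpoint, data=body,
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)  # each platform matches the hash and removes copies

Shipping a hash rather than the image itself also means the material isn't being re-uploaded to yet another system.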

thaumasiote · today at 2:37 PM

> CSAM can be detected through hashes or a machine-learning image classifier (with some false positives), whereas

Everything can be detected "with some false positives". If you're happy with "with some false positives", why would you need any context?
