It's a delicate subject but not an unprecedented one. Automatic detection of already-known CSAM images (as opposed to heuristic detection of unknown images) has been around much longer than AI, and for that service to exist, someone has to handle the actual CSAM before it's reduced to a perceptual hash in a database.
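To make "perceptual hash" concrete: the idea is that an image is reduced to a short fingerprint that survives minor edits, and matching then happens on fingerprints, not images. Below is a minimal sketch of one simple scheme (an "average hash"); real services like PhotoDNA use far more robust proprietary transforms, so this only illustrates the concept, and the 8x8 input grid is an assumption for brevity.

```python
# Minimal "average hash" sketch, assuming the image is already
# downscaled to an 8x8 grayscale grid. Illustrative only; real
# perceptual hashes (e.g. PhotoDNA) are much more robust than this.

def average_hash(pixels):
    """pixels: 8x8 grid of 0-255 grayscale values -> 64-bit int."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # One bit per pixel: brighter than average or not.
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Bits that differ between two hashes; small = likely same image."""
    return bin(a ^ b).count("1")

# A synthetic image and a uniformly brightened copy of it.
img = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
noisy = [[min(255, p + 3) for p in row] for row in img]
print(hamming(average_hash(img), average_hash(noisy)))
# → 0: the average shifts with the pixels, so the hash is unchanged
```

The point for the thread: once the hash database exists, scanning only ever compares fingerprints like these, which is why the original material can be destroyed after hashing.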
Maybe AI-based heuristic detection is more ethically/legally fraught, since you'd have to stockpile CSAM to train on, rather than hashing it and destroying your copy immediately after obtaining it.
> Maybe AI detection is more ethically fraught since you'd need to keep hold of the CSAM until the next training run
Why? The damage is already done either way.