Why would you think that? Every distribution, every view is adding damage, even if the original victim doesn't know (or even would rather not know) about it.
I don't think AI training on a dataset counts as a view in this context. The concern is predators getting off on what they've done, not developing tools to stop them.