> The burden of proof should fall on the platform, not the victim. The question is not whether a harmed user can show specific damage. The question is whether the company can show, before rolling a product out to billions of people, that it is not predatory by design.
That's asking every company to prove a negative before rolling out new features.
Could we instead have a regulatory agency that keeps an eye on dark patterns and deals with them as evidence emerges that something is harmful?
> That's asking every company to prove a negative before rolling out new features.
That’s not as ridiculous as it seems. That’s the sort of model that drug manufacturers follow. It would also mean that if they see troubling behaviour internally, they know they have to stop.
Practically, it would just invite corporate cover-ups. And applied earnestly, it would make these businesses unviable.