I can't agree with a 'companies won't be evil because they will lose business if people don't like their evilness!' argument.
Certainly, going through life not trusting any company isn't a fun way to live. Going through life not trusting in general isn't a fun way to live.
Would you like to see my inbox?
We as tech people made this reality by believing in an invisible hand of morality that would be stronger than power, stronger than the profit available from intentionally harming strangers a little bit (or a lot) at scale, over the internet, often in an automated way, whenever there was a chance we'd benefit from it.
We're going to have to be the people thinking of what we collectively do in this world we've invented and are continuing to invent, because the societal arbitrage vectors aren't getting less numerous. Hell, we're inventing machines to proliferate them, at scale.
I strongly encourage you to abandon the idea that the world we've created is optimal, and the idea that companies, of all things, will behave ethically because they perceive they'll lose business if they are evil.
I think they are fully correct in perceiving the exact opposite, and it's on us to change the conditions underneath them.
My argument here is not that companies will lose customers if they are unethical.
My argument is that they will lose paying customers if they act against those customers' interests in a way that directly violates a promise they made when convincing those customers to sign up and pay them money.
"Don't train on my data" isn't some obscure concern. If you talk to large companies about AI it comes up in almost every conversation.
My argument here is that companies are cold-hearted entities that act in their own self-interest.
Honestly, I swear the hardest problem in computer science in 2025 is convincing people that you won't train on their data when you say "we won't train on your data".
I wrote about this back in 2023, and nothing has changed: https://simonwillison.net/2023/Dec/14/ai-trust-crisis/