This is getting a ton of hate here, but to me it feels like a reasonably balanced response to competing concerns: protecting literally billions of non-tech-savvy users from potentially malicious social-engineering attacks, while still giving developers and tech-savvy users a path to bypass that protection if they're sure they want to.
What concrete change to the policy would be a strict Pareto improvement keeping just those two concerns in mind?
I'm pretty surprised at the amount of hate here. It's all "just build it ourselves!" and "Google wants your data", with almost no top-level comments even discussing the difficulty of dealing with malware and social engineering.
There are at least three moral arguments that can be made:
- Google, as a capitalist company, is ignoring the privacy and FOSS implications, and is guilty of screwing the customer due to greed
- Regular, non-tech folks are constantly being robbed of their privacy, money, and/or identity through malware and social engineering attacks, and Google is guilty of not doing enough to protect them
- Enabling malware delivery and use props up criminals and known bad actors (e.g., North Korean ones), and by not stopping this Google is guilty of supporting those bad actors
I'm not seeing either of the last two points made strongly. Maybe it's just not the target audience — people here aren't as likely to be scammed, and few of us are regularly thinking about North Korea — but I'd expect to see more consideration of the costs of inaction here.