> those that lead to undefined behavior but aren't security-critical.
Once again, C++ people imagining into existence "Undefined Behaviour which isn't Security Critical", as if that's somehow a thing.
Mostly I read the link because I was intrigued as to how this counted as "at scale", and it turns out that's misleading: the article's main body is about the (at scale) deployment at Google, not the actual hardening work itself, which wasn't "at scale" in any special way.
> Undefined Behaviour which isn't Security Critical as if somehow that's a thing
Undefined behavior in the (poorly written) spec doesn't mean undefined behavior in the real world. A given compiler is perfectly free to specify the behavior.
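For example, signed integer overflow is UB per the standard, yet both GCC and Clang offer `-fwrapv`, which pins it down to two's-complement wraparound. A minimal sketch:

```c++
#include <climits>
#include <cstdio>

int main() {
    int x = INT_MAX;
    // UB per the C++ standard, but defined two's-complement wraparound
    // to INT_MIN when the translation unit is compiled with -fwrapv.
    x += 1;
    std::printf("%d\n", x);
}
```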
Of course there is undefined behavior that isn't security critical. Hell, most bugs aren't security critical. In fact, most software isn't security critical at all. If you are writing security-critical software, then I can understand this confusion; but you have to remember that most people aren't.
The author of TFA actually makes another related assumption:
> A crash from a detected memory-safety bug is not a new failure. It is the early, safe, and high-fidelity detection of a failure that was already present and silently undermining the system.
Not at all? Most memory-safety issues will never even show up on the radar, while with "hardening" you've converted all of them into crashes that for sure will, annoying customers. Surely there must be a middle ground, which leads us back to the "debug mode" the article tries and fails to argue against.
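To make the trade-off concrete, a minimal sketch (assuming a recent Clang/libc++ toolchain; as I understand it, `_LIBCPP_HARDENING_MODE` is libc++'s hardening switch):

```c++
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    // Out-of-bounds read: UB per the standard. In a typical unhardened build
    // this silently reads whatever sits past the buffer and carries on; built
    // with a hardened libc++ (e.g. -D_LIBCPP_HARDENING_MODE=
    // _LIBCPP_HARDENING_MODE_FAST) the bounds check fires and the process
    // traps here instead.
    std::printf("%d\n", v[3]);
}
```

The bug that might have gone unnoticed for years becomes a guaranteed, user-visible crash, which is exactly the conversion being debated here.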