I agree. But also at the root of it is that a problem can't be escalated to a thinking human who has the power to act on it.
The fallacy here is a belief that the filter is perfect. Or really, that any process can be perfect. Even if one could be perfect at a specific moment in time, well, time marches on and things change.
I'm all for automation, but it has to be recognized that the thing will always break, and likely in a way you don't expect. Even in ways you __couldn't__ expect. So you have to design with that failure in mind. A lot of these "Falsehoods Programmers Believe About <X>" lists could be summarized as "Programmers Believe They Can Accurately Predict All Reasonable Situations". I added "reasonable" on purpose. The world is just complex and we can only see a very limited slice of it. The best way to be accurate is to know that you're biased, even if you can't tell in which way you're biased.
Yeah, I don't really get the idea behind automated filtering without an edge-case basket. Hubris or lack of resources?
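For what it's worth, a minimal sketch of what that "edge case basket" could look like (all names and rules here are hypothetical, just to illustrate the shape): every rule is allowed to abstain, and anything the rules don't confidently agree on goes to a human-review queue instead of being silently dropped.

```python
from dataclasses import dataclass, field

@dataclass
class FilterResult:
    accepted: list = field(default_factory=list)
    rejected: list = field(default_factory=list)
    needs_review: list = field(default_factory=list)  # the "edge case basket"

def run_filter(items, rules):
    """Apply rules to each item; rules return "accept", "reject", or None (abstain)."""
    result = FilterResult()
    for item in items:
        verdicts = {rule(item) for rule in rules} - {None}
        if verdicts == {"accept"}:
            result.accepted.append(item)
        elif verdicts == {"reject"}:
            result.rejected.append(item)
        else:
            # No rule matched, or the rules conflicted: escalate to a
            # human instead of guessing. This is the path most automated
            # filters are missing.
            result.needs_review.append(item)
    return result

# Hypothetical spam-style rules for illustration:
rules = [
    lambda msg: "reject" if "free money" in msg else None,
    lambda msg: "accept" if msg.endswith("?") else None,
]
triaged = run_filter(["free money now", "is this spam?", "hello"], rules)
# "hello" matched no rule, so it lands in needs_review, not a silent bucket
```

The point isn't the rules themselves; it's that "I don't know" is a first-class outcome, so the unexpected cases surface to a person rather than being forced into accept/reject.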