I had that thought too; the author claims Tor suffers from use-after-free bugs, but really? A 20-year-old codebase with a massive, supportive community suffers from basic memory issues that can often be caught with linters or dynamic analysis? Surely there are development issues I'm not aware of, but I wasn't really sold by the article.
If a codebase is being maintained and extended, it's not all code with 20 years of testing.
Every change you make could be violating some assumption made elsewhere, maybe even 20 years ago, and subtly break code at a distance. C's type system doesn't carry much information and is hostile to static analysis, which makes changes in large codebases difficult, laborious, and risky.
Rust is a linter and static analyzer cranked up to maximum. The whole language has been designed so that the information needed for static analysis is easily available and reliable, and coding patterns that create dead ends for static analysis are disallowed or contained. C never had this focus, so even trivial checks devolve into whole-program analysis and quickly hit undecidability. For example, in Rust, whenever you have a &mut reference, you know for sure that it's valid, non-null, and initialized, that no other code anywhere can mutate it at the same time, and that no other thread will even look at it. In C, when you have a pointer to an object, eh, good luck!
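A minimal sketch of what that exclusivity guarantee buys you (toy code of my own, deliberately written so the borrow checker rejects it):

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0]; // shared borrow: `v` is read-only while `first` lives
        v.push(4);         // rejected at compile time: cannot borrow `v` as
                           // mutable because it is also borrowed as immutable
        println!("{first}"); // keeps the shared borrow alive past the push
    }

The analogous C, where you hold a pointer into a buffer that the push then reallocates, compiles fine and is a latent use-after-free.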
That can easily happen in C programs. Some edge case doesn't really happen unless you specifically craft inputs for it, or even sequences of inputs, and simply no one was aware of it for those 20 years, or no one tried to exploit that specific part of the code. With C you are never safe unless you've somehow proved the code to be free of such bugs.
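For contrast, here's another toy sketch (again intentionally broken) of the kind of lurking bug Rust refuses to compile, where C would happily let it ship:

    fn main() {
        let r;
        {
            let s = String::from("short-lived");
            r = &s; // rejected at compile time: `s` does not live long enough
        }           // `s` is dropped here, so `r` would dangle
        println!("{r}"); // in C, the analogous read is a silent use-after-free
    }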