Aside from just being immensely satisfying, these patterns of defensive programming may be a big part of how we get to quality GAI-written code at scale. Clippy (and the Rust compiler proper) can provide so much of the concrete iterative feedback an agent needs to stay on track and not gloss over mistakes.
It's really wise to avoid indexing directly into arrays and vectors.
The same day Cloudflare had its unwrap fiasco, I found a bug in my code because of a slice that in certain cases went past the end of a vector. Switched it to use iterators and will definitely be more careful with slices and array indexes in the future.
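A minimal sketch of the kind of change I mean, with invented names, showing checked access and iterators instead of raw slice indexing:

    // Indexing panics when the range runs past the end:
    // let window = &values[i..i + 3];

    // Checked access returns an Option instead of panicking:
    fn third(values: &[i32]) -> Option<&i32> {
        values.get(2)
    }

    // Iterators avoid the index arithmetic entirely:
    fn window_sums(values: &[i32]) -> Vec<i32> {
        values.windows(3).map(|w| w.iter().sum()).collect()
    }

    fn main() {
        let values = [1, 2, 3, 4];
        assert_eq!(third(&values), Some(&3));
        assert_eq!(window_sums(&values), vec![6, 9]);
    }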
Wow that’s amazing. The partial equality implementation is really surprising.
One question about avoiding boolean parameters: I've just been using newtype structs wrapping bools. But you can't treat them like bools… you have to reach into them with wrapper.0.
Is there a way to treat the enum-style replacement for bools like normal bools, or is it just done with matches! or match statements?
It’s probably not too important but if we could treat them like normal bools it’d feel nicer.
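For what it's worth, a minimal sketch of one way to do it (the enum and names here are just illustrative): give the enum a small predicate method, plus a conversion from bool, so call sites read naturally without spelling out a full match.

    #[derive(Clone, Copy, PartialEq, Eq)]
    enum Overwrite {
        Yes,
        No,
    }

    impl Overwrite {
        fn as_bool(self) -> bool {
            matches!(self, Overwrite::Yes)
        }
    }

    impl From<bool> for Overwrite {
        fn from(b: bool) -> Self {
            if b { Overwrite::Yes } else { Overwrite::No }
        }
    }

    fn save(path: &str, overwrite: Overwrite) {
        if overwrite.as_bool() {
            println!("overwriting {path}");
        }
    }

    fn main() {
        save("report.txt", Overwrite::Yes);
        save("report.txt", Overwrite::from(false));
    }

It never behaves exactly like a bare bool, but a small method on the enum keeps the call sites readable while the parameter type stays self-documenting.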
The very useful TryFrom trait landed only in 1.34, so hopefully the code using unwrap_or_else() in a From impl predates that...
Actually the From trait documentation is now extremely clear about when to implement it (https://doc.rust-lang.org/std/convert/trait.From.html#when-t...)
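For context, a minimal sketch of the difference, with an invented type: a From impl has no way to fail, so fallibility gets hidden behind unwrap/unwrap_or_else, whereas TryFrom keeps the failure visible at the call site.

    use std::convert::TryFrom;

    struct Percentage(u8);

    // A From impl can't fail, so a bad input forces a panic (unwrap) or a
    // silent fixup (unwrap_or_else). TryFrom surfaces the fallibility.
    impl TryFrom<u8> for Percentage {
        type Error = String;

        fn try_from(value: u8) -> Result<Self, Self::Error> {
            if value <= 100 {
                Ok(Percentage(value))
            } else {
                Err(format!("{value} is not a valid percentage"))
            }
        }
    }

    fn main() {
        assert!(Percentage::try_from(42).is_ok());
        assert!(Percentage::try_from(150).is_err());
    }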
In Pattern: Defensively Handle Constructors, it recommends using a nested inner module with a private Seal type. But the Seal type is unnecessary. The nested inner module is all you need: a private field `_private: ()` can only be set from within that inner module, so you don't need the extra private type.
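A minimal sketch of what I mean (the names are illustrative):

    mod order {
        pub struct Order {
            pub id: u64,
            // Only code inside this module can provide this field, so
            // `Order { .. }` literals don't compile anywhere else and every
            // construction has to go through `new`.
            _private: (),
        }

        impl Order {
            pub fn new(id: u64) -> Self {
                // validation / invariant checks live here
                Order { id, _private: () }
            }
        }
    }

    fn main() {
        let o = order::Order::new(1);
        println!("{}", o.id);
        // order::Order { id: 2, _private: () }; // error: field `_private` is private
    }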
This made me wonder: why aren't there usually teams whose job is to keep an eye on the coding patterns used across the various codebases? Similar to how you have an SOC team monitoring traffic patterns, an Operations Support team monitoring health probes, KPIs, and logs, or a QA team writing tests against new code, maybe there would be value in tracking what coding patterns develop into over the lifetime of a codebase?
Like whenever I read posts like this, they're always fairly anecdotal. Sometimes there will even be posts about how large refactor x unlocked new capability y. But the rationale always reads somewhat retconned (or again, anecdotal*). It seems to me that maybe such continuous meta-analysis of one's own codebases would have great potential utility?
I'd imagine automated code smell checking tools can only cover so much at least.
* I hammer on about anecdotes, but I do recognize that sentiment matters. For example, when you're planning work, if something merely sounds like a lot of work, that alone will be impactful, even if that judgment is incorrect (since the misjudgment may never come to light).
Nice article. The problem of multiple booleans is just one instance of a more general problem: when a function takes multiple arguments of the same type (i32, String, etc.). The newtype pattern allows you to create distinct types in such cases and enforce correctness at compile time.
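A minimal sketch of that idea (the domain names are invented):

    // Two distinct types that both wrap a String; swapping them is now a
    // compile error instead of a silent bug.
    struct UserName(String);
    struct Email(String);

    fn register(name: UserName, email: Email) {
        println!("registering {} <{}>", name.0, email.0);
    }

    fn main() {
        let name = UserName("Ada".to_string());
        let email = Email("ada@example.com".to_string());
        register(name, email);
        // register(email, name); // error: mismatched types
    }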
Loved the article, such a nice read. I am still slowly ramping up my proficiency in Rust and this gave me a lot of things to think through. I particularly enjoyed the temporary mutability pattern, very cool and didn't think about it before!
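For anyone who hasn't seen it, my understanding of the temporary mutability pattern the article describes is roughly this: do the mutation inside a block, then bind the result immutably so it can't be modified afterwards.

    fn main() {
        // The binding is only mutable inside the block; once the block ends,
        // the result is frozen as an immutable binding.
        let data = {
            let mut data = vec![3, 1, 2, 2];
            data.sort();
            data.dedup();
            data
        };

        // data.push(4); // error: cannot borrow `data` as mutable
        println!("{data:?}");
    }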
The tech industry is full of brash but lightly-seasoned people resurrecting discredited ideas for contrarianism cred and making the rest of us put down monsters we thought we'd slain a long time ago.
"Defensive programming" has multiple meanings. To the extent it means "avoid using _ as a catch-all pattern so that the compiler nags you if someone adds an enum arm you need to care about", "defensive" programming is good.
That said, I wouldn't use the word "defensive" to describe it. The term lacks precision. The above good practice ends up getting mixed up with the bad "defensive" practices of converting contract violations to runtime errors or just ignoring them entirely --- the infamous pattern in Java codebases of scrawling the following like of graffiti all over the clean lines of your codebase:
if (someArgument == null) {
throw new NullPointerException("someArgument cannot be null");
}
That's just noise. If someArgument can't be null, let the program crash.
Needed file not found? Just return ""; instead.
Negative number where input must be contractually not negative? Clamp to zero.
Program crashing because a method doesn't exist? Just add if not hasattr(self, "blah"): return None.
People use the term "defensive" to refer to code like the above: programs that "defend" against crashes by misbehaving. These programs end up being flakier and harder to debug than programs that are "defensive" in the sense of continually validating their assumptions and crashing when they detect a situation that should be impossible.
The term "defensive programming" has been buzzing around social media the past few weeks and it's essential that we be precise that
1) constraint verification (preferably at compile time) is good; and
2) avoidance of crashes at runtime at all costs after an error has occurred is harmful.
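To make the contrast concrete, a minimal sketch with an invented type: the first constructor surfaces the contract violation, while the second "defends" against the crash by clamping and quietly corrupts the data.

    struct Quantity(u32);

    impl Quantity {
        // Good: verify the constraint and fail loudly at the boundary.
        fn new(n: i64) -> Result<Self, String> {
            u32::try_from(n)
                .map(Quantity)
                .map_err(|_| format!("quantity must be non-negative, got {n}"))
        }

        // Bad: "defend" against the crash by clamping; the bug survives and
        // resurfaces far away as mysteriously wrong data.
        fn new_clamped(n: i64) -> Self {
            Quantity(n.max(0) as u32)
        }
    }

    fn main() {
        assert!(Quantity::new(-1).is_err());
        assert_eq!(Quantity::new_clamped(-1).0, 0);
    }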
I'm not seeing a solid argument against using "..Default::default()": the claim is that doing so might introduce a bug, so you should be explicit about EVERYTHING instead? Ugh. Hard disagree.
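For readers who haven't hit that part of the article, the syntax in question is (as I read it) the struct update syntax combined with Default; the article's worry is that a newly added field silently picks up its default instead of forcing a decision. A minimal sketch with an invented type:

    #[derive(Default)]
    struct Config {
        retries: u32,
        verbose: bool,
        // If a field is added here later, the constructor below keeps
        // compiling and silently fills it in with its default value; that
        // is the trade-off the article is worried about.
    }

    fn main() {
        let cfg = Config {
            retries: 3,
            ..Default::default()
        };
        println!("retries={} verbose={}", cfg.retries, cfg.verbose);
    }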
This was posted with a (mostly) healthy discussion on lobste.rs, here's the link https://lobste.rs/s/ouy4dq/patterns_for_defensive_programmin...
This is one of the best Rust articles I've ever read. It's obviously from experience and covers a lot of _business logic_ foot guns that Rust doesn't typically protect you against without a little bit of careful coding that allows the compiler to help you.
So many Rust articles are focused on people doing dark sorcery with "unsafe", and this is just normal everyday API design, which is far more practical for most people.
Good article, but one (very minor) nit I have is with the PizzaOrder example.
The problem they want to address is partial equality: you want to compare orders while ignoring the ordered_at timestamp. To me, the real problem is throwing too many unrelated concerns into one struct. Ideally, instead of using destructuring to compare only the specific fields you care about, you'd decompose this into two structs, one holding the fields that participate in equality and one carrying metadata like the timestamp (see the sketch below).
I get that this is a toy example meant to illustrate the point; there are certainly more complex cases where there's no clean boundary to split your struct across. But this should be the first tool you reach for.
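Roughly what I have in mind, with field names invented to stay close to the example:

    // The part of the order that participates in equality.
    #[derive(PartialEq, Eq)]
    struct PizzaSpec {
        size: String,
        toppings: Vec<String>,
    }

    // The envelope carrying metadata that should never enter comparisons.
    struct PizzaOrder {
        spec: PizzaSpec,
        ordered_at: std::time::SystemTime,
    }

    fn same_pizza(a: &PizzaOrder, b: &PizzaOrder) -> bool {
        // Compare only the spec; the timestamp can't leak into the comparison.
        a.spec == b.spec
    }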