I wish these articles acknowledged that densely packed structures like that have significant overhead in terms of the instructions which must be generated to parse them. If that shit gets inlined all over the place, how much bigger is the binary now? Absolute minimalism is rarely the right choice, the size of .text matters too.
That would probably warrant a follow-up article. I did find myself wondering where the tipping point is between using a slightly less space-efficient storage format and accepting the computational overhead of unpacking a denser one.
For example, you technically don't need to track castling availability. If you're storing the entire match as a sequence of positions, you can deduce it by replaying the previous positions, something like the sketch below. A quick search suggests an average chess match runs about 40 moves, so replaying all previous positions isn't that bad, on average.
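A rough sketch of that idea, with a made-up move format of (from_square, to_square) pairs numbered a1=0 .. h8=63 (not anything from the article):

    # Sketch: derive castling availability by replaying the move list
    # instead of storing it alongside every position.
    WHITE_KING, WHITE_ROOK_A, WHITE_ROOK_H = 4, 0, 7
    BLACK_KING, BLACK_ROOK_A, BLACK_ROOK_H = 60, 56, 63

    def castling_rights(moves):
        touched = set()
        for frm, to in moves:
            touched.add(frm)
            touched.add(to)  # a rook that gets captured also loses its right
        return {
            "white_kingside":  WHITE_KING not in touched and WHITE_ROOK_H not in touched,
            "white_queenside": WHITE_KING not in touched and WHITE_ROOK_A not in touched,
            "black_kingside":  BLACK_KING not in touched and BLACK_ROOK_H not in touched,
            "black_queenside": BLACK_KING not in touched and BLACK_ROOK_A not in touched,
        }

Same trick works for en passant and the fifty-move counter, at the cost of re-scanning the history every time you need that state.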
If you need to store millions of chess matches, being able to store them in ~1 KB each might be more important than the overhead of unpacking each state. If you need to query for certain positions across all those matches, maybe less "compression" is desired.
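One hypothetical middle ground (not from the article): keep the packed blobs for storage, but build a side index of position hashes so queries don't have to unpack every game each time. Here unpack_positions() is an assumed decoder for one game's blob:

    # Sketch: packed games stay compact on disk; the index maps a
    # position hash to where that position occurs.
    from collections import defaultdict

    position_index = defaultdict(list)   # hash -> [(game_id, ply), ...]

    def build_index(games):              # games: {game_id: packed_blob}
        for game_id, blob in games.items():
            for ply, pos in enumerate(unpack_positions(blob)):
                position_index[hash(pos)].append((game_id, ply))

    # Later, position_index[hash(target_position)] gives candidate
    # (game_id, ply) hits to verify, without re-decoding everything.

You pay once at index-build time and with extra memory, which is exactly the kind of context-dependent trade-off the article doesn't get into.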
I always enjoy articles about how people store data and how they think about capturing state, but I also like to know the context and how that data is used or queried.