An unprocessed photo does not “look” like anything. It is just RGGB pixel values whose dynamic range far exceeds any display medium. Fitting it into the tiny dynamic range of a screen by throwing away data strategically (choosing a perceptual neutral grey point, etc.) is what actually makes sense of those values, and that is the creative task.
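To make the “throwing away data strategically” point concrete, here's a rough Python sketch (the 14-bit depth and gamma of 1/2.2 are illustrative assumptions, not from the article): a naive linear squeeze of sensor values into 8 bits crushes the shadows, while a perceptual curve spends the 256 output codes where our eyes actually care.

```python
import numpy as np

# Hypothetical linear values from a 14-bit sensor (0..16383):
# deep shadow, dark detail, mid-tone, bright highlight.
raw = np.array([10, 180, 1600, 14000], dtype=np.float64)

# Naive linear scaling into 8 bits: the two shadow values
# collapse to almost nothing.
naive = np.round(raw / 16383 * 255).astype(int)
# -> [  0   3  25 218]

# A simple perceptual (gamma 1/2.2) curve instead spreads the
# 256 available codes across tones the eye can distinguish.
gamma = np.round((raw / 16383) ** (1 / 2.2) * 255).astype(int)
# -> [  9  33  89 237]
```

Either way you end up with 256 levels; the choice of curve decides which of the 16384 input levels survive, which is exactly the creative decision.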
In the article, adjusting for the range makes quite a small difference compared to the other steps.
Right. The statement “Here’s a photo of a Christmas tree, as my camera’s sensor sees it” is incoherent.
Yeah, this is what I immediately think too any time I see an article like this. Adjustments like contrast and saturation are plausible to show before/after, but a “before any tone curve” image makes no sense unless you have some magic extreme-HDR linear display technology (we don't). Putting linear data into 0-255 pixels which are then interpreted as sRGB makes no sense whatsoever. You are basically viewing junk. It's not like that's what the camera actually "sees". The camera sees a similar scene to what we see with our eyes, although it natively stores and interprets it differently from how our brain does (i.e. linear vs perceptual).
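A rough sketch of the mismatch (using the standard sRGB encoding constants, nothing from the article): displays expect values pushed through the strongly nonlinear sRGB transfer function, so a linear camera value dumped straight into a 0-255 sRGB byte comes out far too dark.

```python
def linear_to_srgb(x: float) -> float:
    """Standard sRGB encoding (OETF) for a linear value in [0, 1]."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

# Linear mid-grey (18% reflectance). Dumped raw into an sRGB byte
# it becomes 46 (near-black); properly encoded it is ~118.
mid_grey = 0.18
print(round(mid_grey * 255))                  # 46
print(round(linear_to_srgb(mid_grey) * 255))  # 118
```

That factor-of-~2.5 shift on mid-grey alone is why the “as the sensor sees it” image looks like murky junk rather than anything a camera or eye experiences.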