> It's definitely not something you can plug into a three-value model.
What do you mean? And what is screwed up? We use 3 dimensions because most of us are trichromats, and because (not coincidentally) most digital display devices have 3 primaries. Three-value models are definitely sufficient for many color tasks & goals; they work so well that outside of science and graphics research it’s hard to find good reasons to need more, especially for art & design work. It’d be more interesting to identify cases where a 3D color model or color space doesn’t work… what cases are you thinking of? 3D cone response is neither physical (spectral) color nor perceptual (“brain”) color, though it sits much closer to the physical side, and it physically justifies using 3D models without needing to understand the brain or perception, does it not?
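To make the “physically justifies” part concrete, here is a minimal sketch of how any spectral power distribution collapses to three cone responses. The sensitivity curves are crude Gaussian placeholders (not the measured Stockman-Sharpe or CIE functions), just enough to show that however detailed the spectrum is, only three numbers come out the other side, which is why physically different spectra can be indistinguishable (metamers):

```python
import numpy as np

# Wavelength grid (nm) covering the visible range.
wavelengths = np.arange(380, 781, 5, dtype=float)

def bell(center, width):
    # Crude bell-curve stand-in for a cone sensitivity curve.
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Placeholder L/M/S sensitivities (roughly correct peaks, otherwise made up).
cones = np.stack([bell(565, 45),   # L
                  bell(540, 40),   # M
                  bell(445, 25)])  # S

def cone_response(spectrum):
    # Project a spectral power distribution onto the three cone classes:
    # each value is the (discretized) integral of spectrum * sensitivity.
    return (cones * spectrum).sum(axis=1) * 5.0  # 5 nm step

# Two physically different spectra: a narrowband "yellow" light vs. a
# mixture of a "green" light and a "red" light.
spd_narrow = bell(580, 15)
spd_mixture = 0.5 * bell(545, 20) + 0.6 * bell(620, 20)

print(cone_response(spd_narrow))
print(cone_response(spd_mixture))
# However detailed the input spectrum is, only three numbers survive the
# projection. Two spectra that land on the same triple (metamers) are
# indistinguishable to a trichromatic observer, which is the physical reason
# three-value stimulus models work at all.
```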
I had experimented with some photo printing services and came across one professional-level service that offered pigment inkjet printing (vs. the much more common dye inkjet printing). Their printers had 12 colors of ink vs. the traditional 4. I did some test photos and visually they looked stunning.
They are very useful for encoding stimuli, but a stimulus is "not yet" color. When an image is more than a single patch of one RGB value, a lot of things will influence what color your visual system computes from the exact same RGB values.
Akiyoshi's color constancy demonstrations are good examples of this. The RGB model (and any three-value "perceptual" model) fails to predict the perceived color here: you are seeing different colors, but the RGB values are the same (a quick way to check this is sketched below the link).
https://www.psy.ritsumei.ac.jp/akitaoka/marie-eyecolorconsta...
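For what it's worth, this is easy to verify numerically. A minimal sketch with Pillow follows; the filename and the two sample coordinates are hypothetical stand-ins for the linked demo image and for two points that appear to be different colors:

```python
from PIL import Image

# Hypothetical filename and coordinates: substitute the actual demo image
# and two points that appear to be different colors (e.g. the two "eyes"
# in the linked illusion).
img = Image.open("color_constancy_demo.png").convert("RGB")
point_a = (120, 200)
point_b = (480, 200)

rgb_a = img.getpixel(point_a)
rgb_b = img.getpixel(point_b)

print(point_a, rgb_a)
print(point_b, rgb_b)
# If the demo behaves as described, the two triples print as identical even
# though the surrounding context makes them look like different colors: the
# encoded stimulus is the same, the perceived color is not.
```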