They are very useful for encoding stimuli, but a stimulus is not yet a color. Once an image is more than a single patch of RGB values, many contextual factors influence what color you will perceive from the exact same RGB triplet.
Akiyoshi Kitaoka's color constancy demonstrations are good examples of this. The RGB model (and any three-value "perceptual" model) fails to predict the perceived color here: you see different colors even though the RGB values are identical.
https://www.psy.ritsumei.ac.jp/akitaoka/marie-eyecolorconsta...
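To make the point concrete, here's a rough sketch of von Kries-style chromatic adaptation, which models one part of this effect: the visual system discounts the illuminant it infers from context, so the same raw triplet lands on different "perceived" colors. The white points below are made-up illustrative values, not measured illuminants, and this is a toy model of adaptation, not a full account of Kitaoka's demos.

```python
def von_kries_adapt(rgb, scene_white):
    """Von Kries adaptation: divide each channel by the assumed
    illuminant's value for that channel, discounting the light source."""
    return tuple(c / w for c, w in zip(rgb, scene_white))

pixel = (0.5, 0.5, 0.5)          # identical stimulus in both contexts
bluish_light = (0.8, 0.9, 1.0)   # context suggests a bluish illuminant
reddish_light = (1.0, 0.9, 0.8)  # context suggests a reddish illuminant

# Same RGB in, different adapted colors out:
print(von_kries_adapt(pixel, bluish_light))   # red channel boosted: looks warmer
print(von_kries_adapt(pixel, reddish_light))  # blue channel boosted: looks cooler
```

The same (0.5, 0.5, 0.5) triplet comes out warmer under an inferred bluish light and cooler under an inferred reddish one, which is exactly why the three stimulus values alone don't pin down the perceived color.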
Here you’re talking only about perception, not physical color. You could use 100-dimensional spectral colors, or even 1D grayscale values, and still get the same result. So this example has no bearing on whether a 3D color space works well for humans. Do you have any other examples suggesting a 3D color space isn’t good enough? I still don’t understand what you meant.