By the time I switched to GPT-5, we were already on 5.1, so I can't speak to 5.0. All I can say is that if the answer came down to something like "push the bias in the other direction and hope we land in the right spot"... well, I think they landed somewhere pretty good.
Don't get me wrong, I get a little tired of it ending turns with "if you want me to do X, say the word." But usually X is actually a good or at least reasonable suggestion, so I generally forgive it for that.
To your larger point: I get that a lot of this comes down to fine-tuning choices and can be easily manipulated. But to me that's fine. I care more about whether the resulting model is useful to me than about how they got there.