Evaluating Kimi-2/DeepSeek against the Gemini family (both served through Vertex AI), it's not clear that open-source models are always cheaper for the same quality.
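A minimal sketch of how that cost-per-quality comparison might be framed, assuming you normalize dollar cost by task success rate; all prices, token counts, and success rates below are made-up placeholders, not measured numbers:

```python
# Sketch: effective cost per successfully completed task.
# Every number here is a hypothetical placeholder for illustration only.

def cost_per_solved_task(price_per_mtok_in: float, price_per_mtok_out: float,
                         tokens_in: float, tokens_out: float,
                         success_rate: float) -> float:
    """Dollar cost per task, amortizing failures/retries into the price."""
    cost_per_call = (tokens_in * price_per_mtok_in +
                     tokens_out * price_per_mtok_out) / 1e6
    return cost_per_call / success_rate

# Hypothetical pricing and quality figures, not real benchmarks.
open_weights = cost_per_solved_task(0.60, 2.50, tokens_in=4000,
                                    tokens_out=1200, success_rate=0.78)
gemini = cost_per_solved_task(1.25, 5.00, tokens_in=4000,
                              tokens_out=900, success_rate=0.85)
print(f"open-weights: ${open_weights:.4f}/task, gemini: ${gemini:.4f}/task")
```

The point of the normalization is that a cheaper per-token model can still cost more per task if it needs more output tokens or more retries to hit the same quality bar.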
And then we have to look at responsiveness: if the two models are qualitatively in the same ballpark, which one runs faster?
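A rough harness for that responsiveness check, assuming you wrap each model's client call in a simple callable; `call_model_a` / `call_model_b` are stand-ins for whatever SDK or HTTP client you actually use:

```python
# Sketch: wall-clock latency comparison across two model endpoints.
import statistics
import time
from typing import Callable

def measure_latency(call: Callable[[str], str], prompt: str, n: int = 10) -> dict:
    """Time n blocking calls and report median and p90 latency in seconds."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        call(prompt)  # blocking request to the model endpoint
        latencies.append(time.perf_counter() - start)
    return {
        "median_s": statistics.median(latencies),
        "p90_s": statistics.quantiles(latencies, n=10)[-1],
    }

# Dummy stand-ins; replace with real client calls (e.g. a Vertex AI client).
def call_model_a(p: str) -> str: return "stub"
def call_model_b(p: str) -> str: return "stub"

prompt = "Summarize the tradeoffs of open-weights vs proprietary models."
print("model A:", measure_latency(call_model_a, prompt))
print("model B:", measure_latency(call_model_b, prompt))
```

Median and p90 over a handful of identical prompts is usually enough to tell whether two "same ballpark" models actually feel the same in an interactive setting.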