Undoubtedly each new model from OpenAI has numerous training and orchestration improvements, etc.
But how much of each product they release is also just a factor of how much they're willing to spend on inference per query in order to stay competitive?
I always wonder how much is technical change vs turning a knob up and down on hardware and power consumption.
GPT-5.0, for example, seemed like a lot of changes more for OpenAI's internal benefit (terser responses, a dynamic 'auto' mode to scale down thinking when not required, etc.).
Wondering if GPT-5.2 is also a case of them, in 'code red' mode, just turning what they already have up to 11 as the fastest way to respond to fiercer competition.
I always liked the definition of technology as "doing more with less". 100 oxen replaced by 1 gallon of diesel, etc.
That it costs more does suggest it's "doing more with more", at least.