With Chain of Thought (text thinking), the models can already use as much compute as they want, in any language (determined by the reinforcement learning training)
I'm not convinced that thinking tokens - which sort of have to serve a specific chain-of-thought purpose - are interchangeable with input tokens, which give the model compute without requiring it to add new text. (Rough sketch of the tokens-as-compute framing below.)
For a very imperfect human analogy, it feels like saying "a student can spend as much time thinking about the text as they want, so the textbook can be extremely terse".
Definitely just gut feelings though - not well tested or anything. I could be wrong.
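To make the "tokens buy compute" framing concrete: a minimal back-of-envelope sketch, assuming the standard ~2 × n_params FLOPs-per-token approximation for a decoder-only transformer. The model size and token counts are hypothetical, and this ignores attention's quadratic term; the point is only that input and thinking tokens buy roughly the same raw compute per token.

```python
# Back-of-envelope sketch (my own, not from the thread): under the common
# ~2 * n_params FLOPs-per-token approximation, every token in context buys
# about the same forward-pass compute, whether it came from the prompt or
# from generated chain-of-thought.

def forward_flops_per_token(n_params: float) -> float:
    """Standard ~2 * n_params estimate of FLOPs for one token's forward pass."""
    return 2 * n_params

def total_flops(n_params: float, n_input: int, n_thinking: int) -> float:
    """Approximate compute over input tokens plus generated thinking tokens."""
    return forward_flops_per_token(n_params) * (n_input + n_thinking)

# Hypothetical 70B-parameter model: 1k input tokens vs 1k thinking tokens.
n_params = 70e9
print(total_flops(n_params, n_input=1000, n_thinking=0))  # ~1.4e14 FLOPs
print(total_flops(n_params, n_input=0, n_thinking=1000))  # same ~1.4e14 FLOPs
```

The raw FLOPs come out the same either way; the asymmetry the post is pointing at is that thinking tokens must additionally form coherent chain-of-thought text, while input tokens carry no such constraint.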