I don't know what a "booster" is, but if a model can solve original math problems, then it's reasoning.
If you can come up with a way to do math without reasoning, that would be, in a sense, even more interesting than AI.
> If you can come up with a way to do math without reasoning, that would be, in a sense, even more interesting than AI.
Logic is just syntactic manipulation of formulas. By the early 1990s logical reasoning was pretty much solved in classical AI (the last major building block being constraint logic programming).
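To make "syntactic manipulation of formulas" concrete, here is a minimal sketch of propositional resolution refutation in Python. The names (`resolve`, `entails`) and the clause encoding are my own illustrative choices, not from any library; the point is that entailment is decided by pure symbol pushing, with no semantics anywhere.

```python
# Clauses are frozensets of literals; a literal is a string, and
# negation is a leading "~". Purely syntactic throughout.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def entails(kb, query):
    """Resolution refutation: KB entails query iff KB plus the
    negated query derives the empty clause."""
    clauses = set(kb) | {frozenset({negate(query)})}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:          # empty clause: contradiction
                        return True
                    new.add(r)
        if new <= clauses:             # no progress: not entailed
            return False
        clauses |= new

# Modus ponens as resolution: from P and (P -> Q), i.e. {~P, Q}, derive Q.
kb = [frozenset({"P"}), frozenset({"~P", "Q"})]
print(entails(kb, "Q"))   # True
print(entails(kb, "R"))   # False
```

Nothing in this procedure "understands" P or Q; it only matches and rewrites strings, which is the sense in which this kind of reasoning was mechanized.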
My dear sir, the entire universe is made of things that "do math without reasoning!"
It's the default, and if we're lucky we harness pieces of it to discern something we're interested in.
A model solving original math problems may look like it is reasoning the way a human does, but internally it is selecting the next token according to probability distributions it has learned over patterns and structures. The model has learned correlations between problems, proof techniques, and answer structures, and when it "reasons" it is selecting a high-probability trajectory through that learned knowledge.
A calculator is different because it is not probabilistic; it executes a fixed procedure. A model doing math is more like a learned probabilistic system that has internalized enough of the structure of mathematics that some of its high-probability trajectories look like genuine reasoning.
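The contrast can be shown in a toy sketch. The "model" below is just a hand-written bigram table standing in for learned next-token probabilities (an assumption for illustration, not a real trained model), while the calculator is a fixed procedure:

```python
import random

def calculator(a, b):
    # Fixed procedure: same input, same output, every time.
    return a + b

# Stand-in for a learned conditional distribution over next tokens.
NEXT = {
    "2+2=": {"4": 0.95, "5": 0.05},
    "proof:": {"induction": 0.6, "contradiction": 0.4},
}

def sample_next(context, rng):
    # Pick a next token by sampling the learned distribution.
    dist = NEXT[context]
    return rng.choices(list(dist), weights=list(dist.values()))[0]

print(calculator(2, 2))                                # always 4
rng = random.Random(0)
print([sample_next("2+2=", rng) for _ in range(5)])
# Mostly "4", occasionally "5": a high-probability trajectory,
# not a guaranteed answer.
```

The calculator never emits anything but 4; the sampler is merely very likely to, which is the distinction being drawn above.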
The difference is that when a human reasoner tackles a problem, they might think "this kind of proof usually goes this way" and then deliberately apply explicit rules of inference. The model may produce the same output, and may even appear to approach the problem the same way, but the mechanism is probabilistic pattern selection rather than explicit rule application.