I've always sort of assumed the models were just making sympy scripts behind the scenes.
Sometimes you can see them do this, and sometimes you can see they just work through the problem in the reasoning tokens without invoking Python.
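For what it's worth, when they do invoke Python it tends to look something like this; a hypothetical sketch of the kind of short SymPy script a model might emit to offload the symbolic work (the specific expression here is just an illustration):

```python
import sympy as sp

x = sp.symbols("x")

# Symbolically integrate an expression...
expr = sp.sin(x) * sp.exp(x)
antideriv = sp.integrate(expr, x)

# ...then differentiate back as a self-check, the kind of
# verification step you sometimes see in the tool-call traces.
assert sp.simplify(sp.diff(antideriv, x) - expr) == 0
print(antideriv)
```

The nice part is that the symbolic engine does the bookkeeping exactly, so the model only has to set the problem up correctly rather than carry every term through in its reasoning tokens.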
Where's Gödel when you need him? A lot of this stuff is symbol shunting, which LLMs should be really good at.