They do not. The fundamental technology behind LLMs does not allow that to be the case. You are hoping that an LLM can do something that it cannot do.
https://arxiv.org/html/2502.16763v2
You are wrong. Especially given that we are talking about models with 50T parameters.
Can they do arbitrary computations for arbitrarily long numbers? Nope. But that's not remotely the same statement, and they can trivially call out to tools to do that in those cases.
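The tool-call pattern is a few lines of glue: the model emits a structured request and a dispatcher runs the exact computation on its behalf. A minimal sketch, with hypothetical tool and field names:

```python
# Minimal sketch of tool calling for exact arithmetic.
# The dict schema ("name", "args") and the tool registry are hypothetical.
def dispatch(tool_call: dict) -> str:
    tools = {"add": lambda a, b: str(a + b)}  # Python ints are arbitrary precision
    return tools[tool_call["name"]](*tool_call["args"])

# e.g. asked to sum two 41-digit numbers, the model emits:
call = {"name": "add", "args": [10**40 + 7, 10**40 + 5]}
print(dispatch(call))  # exact result, no LLM arithmetic involved
```

The point is that correctness comes from the tool, not from the model's weights; the model only has to produce the structured call.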
You do realize that training a neural net to do addition is a beginner-level exercise in ML?
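To make that concrete, here is a toy version of the exercise: a single linear neuron trained with gradient descent on random pairs, which converges to weights [1, 1], i.e. exact addition. (A hypothetical minimal setup, not a claim about how any production model is trained.)

```python
import numpy as np

# Toy exercise: learn f(a, b) = a + b with one linear layer and MSE loss.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))  # input pairs (a, b)
y = X.sum(axis=1)                       # targets a + b

w = np.zeros(2)                         # weights; should converge to [1, 1]
lr = 0.1
for _ in range(500):
    pred = X @ w
    grad = X.T @ (pred - y) / len(X)    # gradient of mean squared error
    w -= lr * grad

print(np.round(w, 3))                   # ≈ [1. 1.]
```

The loss is convex here, so convergence is guaranteed; the "hard" part of arithmetic for LLMs is length generalization over token sequences, not fitting the addition function itself.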