LLMs are designed to carry out "associative reasoning": logic based on the recognition and recall of compositional patterns learned during training.
Having said that, we can still get semantically and logically idempotent output, i.e. output that stays stable and consistent across runs, but it takes a lot of work outside of the LLM. This contrasts with the current hyper-focus on the LLM itself as the be-all and end-all; it is just one component in what ought to be a larger, more involved system for reasoning.
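As a rough illustration of what "work outside of the LLM" can look like (this is a minimal hedged sketch, not the actual system behind the video below; call_llm and the rule set are hypothetical placeholders):

    def call_llm(prompt: str) -> str:
        # Placeholder for any chat-completion API call.
        raise NotImplementedError

    def normalize(text: str) -> str:
        # Collapse superficial variation before comparing runs.
        return " ".join(text.lower().split())

    RULES = [
        # Hypothetical domain axioms, each a predicate over the answer.
        lambda ans: "i don't know" not in ans,
    ]

    def reason(prompt: str, retries: int = 3) -> str:
        for _ in range(retries):
            a = normalize(call_llm(prompt))
            b = normalize(call_llm(prompt))  # second run, same prompt
            # "Idempotent" in the sense above: the same prompt yields the
            # same normalized answer, and no domain rule is violated.
            if a == b and all(rule(a) for rule in RULES):
                return a
        raise RuntimeError("no stable, rule-consistent answer found")

The point of the sketch: the model only proposes answers; acceptance is decided by deterministic checks that live outside it.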
Look at what we were able to accomplish here for Legal AI: not mathematical logic per se, but mimicking (capturing) axiomatic logic in the legal domain:
https://www.youtube.com/watch?v=_9Galw9-Z3Q
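To make "axiomatic logic in the legal domain" concrete, here is another hedged sketch (the axiom, field names, and numbers are invented for illustration and are not taken from the video): the LLM's only job is to populate a structured record from a document, while the axiom itself is checked deterministically outside the model.

    from dataclasses import dataclass

    @dataclass
    class Claim:
        filed_on: int      # year the claim was filed
        incident_on: int   # year of the underlying incident
        limitation: int    # statute-of-limitations window, in years

    def within_limitation(c: Claim) -> bool:
        # Axiom: a claim must be filed within the limitation window.
        return c.filed_on - c.incident_on <= c.limitation

    # Filed 4 years after a 3-year window: the axiom rejects it,
    # regardless of how fluent the model's prose was.
    print(within_limitation(Claim(filed_on=2024, incident_on=2020, limitation=3)))  # False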
marc at sunami dot ai