>> There’s no consensus in the literature on what these mean even if you make it more specific by talking about “mathematical reasoning”, so I don’t really understand what opinions like these are based on.
What literature is that? You can find a very clear consensus on what reasoning is if you read, e.g., the literature on automated reasoning. A brief taste:
Automated Reasoning
Reasoning is the ability to make inferences, and automated reasoning is concerned with the building of computing systems that automate this process. Although the overall goal is to mechanize different forms of reasoning, the term has largely been identified with valid deductive reasoning as practiced in mathematics and formal logic. In this respect, automated reasoning is akin to mechanical theorem proving. Building an automated reasoning program means providing an algorithmic description to a formal calculus so that it can be implemented on a computer to prove theorems of the calculus in an efficient manner. Important aspects of this exercise involve defining the class of problems the program will be required to solve, deciding what language will be used by the program to represent the information given to it as well as new information inferred by the program, specifying the mechanism that the program will use to conduct deductive inferences, and figuring out how to perform all these computations efficiently. While basic research work continues in order to provide the necessary theoretical framework, the field has reached a point where automated reasoning programs are being used by researchers to attack open questions in mathematics and logic, provide important applications in computing science, solve problems in engineering, and find novel approaches to questions in exact philosophy.
https://plato.stanford.edu/entries/reasoning-automated/
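To make the entry's description concrete, here is a minimal sketch of one classical inference mechanism, forward chaining over propositional Horn rules, in Python. The rule set and fact names are invented for illustration; real automated reasoners (resolution provers, SMT solvers) are far more sophisticated, but the core loop is the same: apply a sound inference rule until nothing new follows.

    # Forward chaining: repeatedly apply modus ponens until no new
    # facts can be derived. `rules` is a list of (premises, conclusion)
    # pairs meaning "if all premises hold, infer the conclusion".
    def forward_chain(facts, rules):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in derived and all(p in derived for p in premises):
                    derived.add(conclusion)  # one valid deductive step
                    changed = True
        return derived

    rules = [
        ({"socrates_is_a_man"}, "socrates_is_mortal"),
        ({"socrates_is_mortal", "all_mortals_die"}, "socrates_dies"),
    ]
    print(forward_chain({"socrates_is_a_man", "all_mortals_die"}, rules))
    # set containing all four facts, including the two derived ones
    # (set print order may vary)

Every conclusion here follows necessarily from the premises, which is exactly the "valid deductive reasoning" the entry identifies the field with.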
After that you may want to look at the SEP articles on Analogical Reasoning and Defeasible Reasoning.
That's an obsolete definition that treats reasoning as a simplistic mechanical task explicitly encoded by humans. What LLMs are attempting is far beyond that: an automated process for creating their own reasoning methods.
I was referring to the literature on machine learning and language models specifically, since that's what the paper is about.
What you're referencing seems closer to symbolic AI / formal logic, and I get that the two are related, but it just doesn't map neatly onto LLMs.