I was referring to the machine learning and language model literature specifically, since that's what the paper is about.
What you're referencing seems to be more related to symbolic AI / formal logic, and I get that these are related, but it just doesn't map neatly onto LLMs.
Thank you for clarifying. I think you are arguing that the understanding of what "reasoning" means in most of CS and AI research does not "neatly map" onto LLMs. In that case it's not that there's no consensus; there is, but you don't think that consensus is relevant. Is that correct?
The problem with that is that if we allow ourselves to come up with a new definition of an old concept just because the standard definition doesn't match the latest empirical results, we'll be creating a serious risk of confirmation bias: every time we want to answer the question "is X doing reasoning?", we'll just change our definition of reasoning to match whatever X is doing. We can't ever hope to get any real answers that way.