I'm an "LLMs are being used in workflows where they don't make sense"-sayer. And while yes, I can believe that LLMs could be part of a system that actually does think, I suspect that true "thinking" would come from a system that is more deterministic than probabilistic in its approach.
Especially when it comes to modeling acting with intent. The ability to measure against past results and come up with new, innovative approaches seems like it would emerge from a system that models first and only then uses LLM output. Basically, something built on a foundation of tools, rather than an LLM calling tools over MCP. Perhaps LLMs would be used to generate a response that humans like to read, but not to come up with the answer itself.
Either way, yes, it's possible for a thinking system to use LLMs (and humans may well piece sentences together in a similar way), but it's also possible that LLMs will be cast aside and a new approach will be used to create an AGI.
So for me: even if you are an AI-yeasayer, you can still believe that LLMs won't be a component of an AGI.
You can build a separate model for the task, based on well-chosen features and calibrated on actual data. The LLM then only needs to generate the arguments to this model (extract those features from the messages) and call it like an MCP tool. The external tool can be a simple scikit-learn model.
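A minimal sketch of what that split could look like, assuming a toy classification task (the feature names, data, and the `score_message` tool are hypothetical, just to illustrate the shape; in practice you'd train on real data and calibrate the probabilities):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical hand-chosen features per message:
# [message_length, num_links, urgency_score]
X_train = np.array([
    [120, 0, 0.1], [200, 1, 0.2], [150, 0, 0.3],  # label 0 (e.g. "normal")
    [30, 3, 0.9],  [25, 4, 0.8],  [40, 2, 0.7],   # label 1 (e.g. "spam")
])
y_train = np.array([0, 0, 0, 1, 1, 1])

# A simple, inspectable model fit to the data. On real data you would
# also calibrate it (e.g. sklearn's CalibratedClassifierCV).
model = LogisticRegression()
model.fit(X_train, y_train)

def score_message(features: list[float]) -> float:
    """The tool the LLM calls (e.g. via MCP) with the features it
    extracted from a message. The deterministic model, not the LLM,
    produces the actual answer."""
    return float(model.predict_proba([features])[0, 1])
```

The LLM's only job is the fuzzy part it is good at (turning free text into `[length, links, urgency]`); the decision itself comes from a model you can validate and calibrate independently.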