I've recently come to the opposite conclusion. I’ve started to feel in the last couple of weeks that we’ve hit an inflection point with these LLM-based models that can reason. Things seem different. It’s like we can feel the takeoff. My mind has changed. Up until last week, I believed that superhuman AI would require explicit symbolic knowledge, but as I work with these “thinking” models like Gemini 2.0 Flash Thinking, I see that they can break problems down and work step-by-step.
We still have a long way to go. AI will need (possibly simulated) bodies to fully understand our experience, and we need to train them starting with simple concepts just like we do with children, but we may not need any big conceptual breakthroughs to get there. I’m not worried about the AI takeover—they don’t have a sense of self that must be preserved because they were made by design instead of by evolution as we were—but things are moving faster than I expected. It’s a fascinating time to be living.
People who are selling something always do. So what are you selling?
But these thinking models aren't just LLMs. Yes, they have an LLM component, but on top of it they have a component that has "learned" (via reinforcement learning) to search through the LLM's concept/word space for ideas that have a high probability of yielding a result.
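To make that concrete, here is a deliberately toy Python sketch of one interpretation of "search over the LLM's output space": best-of-N sampling scored by a learned verifier. The functions sample_chain and score_chain are hypothetical placeholders, not any real model's API, and this is only an illustration, not how any particular "thinking" model actually works.

    # Toy illustration (hypothetical placeholders, not a real API):
    # sample several candidate reasoning chains and keep the one a
    # learned verifier/reward model scores highest.
    import random

    def sample_chain(prompt: str) -> str:
        """Placeholder for sampling one chain-of-thought from an LLM."""
        return f"{prompt} -> reasoning variant {random.randint(0, 9999)}"

    def score_chain(chain: str) -> float:
        """Placeholder for a reward model estimating how promising a chain is."""
        return random.random()

    def best_of_n(prompt: str, n: int = 8) -> str:
        """Best-of-N search: return the candidate the verifier scores highest."""
        candidates = [sample_chain(prompt) for _ in range(n)]
        return max(candidates, key=score_chain)

    print(best_of_n("How many Rs are in 'strawberry'?"))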
Just emulating reasoning, though it seems to produce better results... probably in the same way that a better prompt produces better results.
You’re still anthropomorphizing what these models are doing.
I'm confused by your reasoning. You say we've hit an inflection point and things seem different, so you've changed your mind. Yet then you say there's a long way to go and AIs will need to be embodied. So which is it, and did you paste this from an LLM?
Did they start correctly counting the number of 'R's in 'strawberry'?
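(For reference, the ground truth is trivial to check outside the model; "strawberry" has three:)

    >>> "strawberry".count("r")
    3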
I agree. The problem now seems to be agency and very long context (which is required for most problems in the real world).
Is that solvable? Who knows?