I wonder if the rationalizations people come up with for why this isn't real intelligence will be as creative as ChatGPT's solution.
Proving a negative is a pretty high bar. You also have the problem of defining "real intelligence", which I suspect you can't.
None of it is really from logical thought. The rationalizations don't make any sense, but they haven't for a while. It's an emotional response. Honestly, it's to be expected.
LLMs are definitely intelligent - just not general like humans, and very very jagged (succeeding and failing in head-scratching ways).
Well, it still gets easy problems wrong.
With real general intelligence you'd expect it to solve problems above a certain difficulty at a good clip.
And how about the creative rationalizations about how statistical text generation is actual intelligence? As if there is any intent or motive behind the words that are generated, or any ability to learn literally anything new after it has been trained on human output?
For one, everything its 'intelligence' knows about solving the problem is contained within the finite context window for the particular model and session. Unless the contents of the context window are saved to storage and reloaded later, it won't, unlike a human, "remember" that it solved the problem or save its work somewhere to be easily referenced later.
<edit> My mistake. Responded to a bot but can't delete now. Sorry. </edit>
I think one day the VCs will have given the monkeys on typewriters enough money that these kinds of comments can be generated without human intervention.
You're really telling on yourself if you think an LLM is intelligent.
"This is real intelligence" is the bear position, so I think it's real intelligence.
Remember when people thought multiplying numbers, remembering a large number of facts, and being good at rote calculations was intelligence?
Some people think that multiplying numbers, remembering a large number of facts, and being good at calculations is intelligence.
Most intelligent people do not think that.
Eventually, we will arrive at the same conclusion about what LLMs are doing now.