What does an LLM need to do for you to consider it "smart"?
To me they seem to be pretty damn smart, to put it mildly. They sometimes do stupid things - but so do smart people!
How about writing "all code" by this June, as Dario Amodei predicted in January of this year?
> To me they seem to be pretty damn smart
That's the sorcery mentioned in the GP. The issue comes when people believe it to be smart when in reality it is just next-word prediction. It gives the impression it's actually thinking, and this is by design. Personally I think that's dangerous, in the sense that it gives users a false sense of confidence in the LLM, and so a LOT of people will blindly trust it. That isn't a good thing.
Are they smart or are they imitating things smart people did? (and if so, is there a difference?)
They aren’t smart; they approximate language constructs. They don’t have beliefs, ideas, etc. But have a few rounds of discussion with any LLM and you'll see how they are probabilistic autocompletes based on whatever patterns you feed them.
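To make the "probabilistic autocomplete" point concrete, here's a minimal sketch of greedy next-token generation. It assumes the Hugging Face transformers library with GPT-2 as a small stand-in model; any causal LM works the same way:

```python
# Minimal sketch: an LLM as a next-token predictor (greedy decoding).
# Assumes: pip install torch transformers. GPT-2 is just a small stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits        # scores over the whole vocabulary
        next_id = logits[0, -1].argmax()  # greedily pick the most likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))  # the "autocomplete": one token at a time
```

Everything a chat interface adds on top of that loop is sampling strategy and prompt formatting; there's no belief state anywhere in it.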
LLMs are amazing. You can call them 'smart', but they're not intelligent and never will be.
They are useful, but a cul-de-sac on the path toward AGI.
Not OP, but I think the argument here would be not that LLMs "are not smart" but that "smart" is just the wrong category of thing to describe an LLM as.
A calculator can do very complex sums very quickly, but we don't tend to call it "smart" because we don't think it's operating intelligently according to some internal model of the world. I think the "LLMs are AGI" crowd would say that LLMs are, but it's perfectly consistent to find the output of LLMs impressive and useful while still maintaining that they aren't "smart" in any meaningful way.