Can you be specific?
He recently posted a question he put to grok3, a variation on the trick LLM question (my characterization) of "count the number of this letter in this word." Apparently this is a well-known Achilles heel for LLMs.
Weirdly, though, I tried the same example he gave on lmarena and actually got the correct result from grok3, not what Gary got. So I am a little suspicious of his... methodology?
Since LLMs are not deterministic, it's possible we are both right (or were testing different variants of the model?). But there's a righteousness to his glee in finding these faults in LLMs. He never hedges with "but your results may vary" or "but perhaps they will soon be able to accomplish this."
EDIT: the exact prompt (his typo 'world'): "Can you circle all the consonants in the world Chattanooga"
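For reference, the ground truth is trivial to compute outside an LLM. A minimal Python sketch (assuming only a/e/i/o/u count as vowels, so 'y' and every other letter is a consonant):

    word = "Chattanooga"
    vowels = set("aeiou")
    # keep every alphabetic character that is not a vowel
    consonants = [c for c in word if c.isalpha() and c.lower() not in vowels]
    print(consonants)       # ['C', 'h', 't', 't', 'n', 'g']
    print(len(consonants))  # 6

So any answer other than the six letters C, h, t, t, n, g is wrong regardless of which model run you got.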
Gary Marcus constantly repeats the line that "deep learning has hit a wall!1!" - he was saying this even pre-ChatGPT! It's very easy to dunk on him for this.
That said, his willingness to push back against orthodoxy means he's occasionally right. Scaling really does seem to have plateaued since GPT-3.5, hallucinations remain a problem that is perhaps unsolvable under the current paradigm, and LLMs do seem to struggle with things far outside their training data.
Basically, while listening to Gary Marcus you will hear a lot of nonsense, but it will probably give you a better picture of reality if you can sort the wheat from the chaff. Listening only to Sam Altman or other AI hype-lords, you'll think the Singularity is right around the corner; listen to Gary Marcus and you won't.
Sam Altman has been substantially more correct on average than Gary Marcus, but I believe Marcus is right that the Singularity narrative is bogus.