Just tell one funny thing an LLM said...
Yesterday it was "LLMs can't count the R's in 'strawberry'." Today it's "LLMs can't tell jokes." Tomorrow it will be "LLMs can't do X," all while LLMs get better and better at every objection or challenge posed.
The problem, as I see it, is that you have a fundamental objection to categorizing the way LLMs do their work as in any way related to "real gosh-darn human thinking." I think that's wrong. At root, we are just information-processing meat that happens to have had millions of years to optimize for speed, pattern recognition, feedback, etc.
Lots of examples here:
https://news.ycombinator.com/item?id=46205632