In fact I do!
"I know I sound like an asshole, but I’ve got a serious question: what can LLMs do today that they couldn’t a year ago? Agents don’t work. LLMs - read stuff, write stuff, analyze stuff, search for stuff, 'write code' and generate images and video. And in all of these cases, they get things wrong."
https://bsky.app/profile/edzitron.com/post/3ma2b2zvpvk2n
This is obviously supposed to be a critique, but a year ago he would never have admitted LLMs can do any of these things, even with errors. This seems strange, but it's typical of Zitron's writing, which is often incoherent in service of sounding as negative as possible. A couple of other examples I've written about are his claims about the "cost of inference" going up and about Anthropic allegedly screwing over Cursor by raising prices on them.
I don't know how far back you're intending to go on Zitron, but I listened to him a bit about 8 months ago, and my impression then was that his opinion was exactly what he's bringing to the table in that quote: the AI can "do" whatever you believe it does, but it does it so poorly that it's not doing it in any worthwhile sense of the word.
I could, of course, be projecting my opinions onto him, but I don't think your characterization of him is accurate. Feel free to provide receipts showing that my impression of his opinion is wrong, though.