"I know I sound like an asshole, but I’ve got a serious question: what can LLMs do today that they couldn’t a year ago? Agents don’t work. LLMs - read stuff, write stuff, analyze stuff, search for stuff, 'write code' and generate images and video. And in all of these cases, they get things wrong."
This is obviously supposed to be a critique, but a year ago he would never have admitted LLMs can do any of these things, even with errors. This seems strange but it's typical of Zitron's writing, which is often incoherent in service of sounding as negative as possible. A couple of other examples I've written about are his claims about the "cost of inference" going up and about Anthropic allegedly screwing over Cursor by raising prices on them:
In fact I do!
"I know I sound like an asshole, but I’ve got a serious question: what can LLMs do today that they couldn’t a year ago? Agents don’t work. LLMs - read stuff, write stuff, analyze stuff, search for stuff, 'write code' and generate images and video. And in all of these cases, they get things wrong."
https://bsky.app/profile/edzitron.com/post/3ma2b2zvpvk2n
This is obviously supposed to be a critique, but a year ago he would never have admitted that LLMs could do any of these things, even with errors. That seems strange, but it's typical of Zitron's writing, which is often incoherent in the service of sounding as negative as possible. A couple of other examples I've written about are his claim that the "cost of inference" is going up and his claim that Anthropic screwed over Cursor by raising prices on them:
https://crespo.business/posts/cost-of-inference/
https://news.ycombinator.com/item?id=45645714