No, I don't, but it sounds very similar to the naysayers who have silently moved the goalposts. That said, you're one of the few people in the wild who still claims LLMs are completely useless, so I'll give you that.
> Models have gotten much better than even the most optimistic predictions.
We were promised Roko's Basilisk by now, damnit! Where's my magical robot god?!
But seriously, predictions a couple years back for 2026/27 (by quite big players, like Altman) were for AGI or as good as.
I do not, for the record, claim that they are totally useless. They are useful where correctness of results does not matter, for instance low-stakes natural language translation and spam generation. There's _some_ argument that they are somewhat useful in cases where their output can be reviewed by an expert (code generation etc.), though honestly the quantitative evidence there is mixed at best; for all the "10x developer" claims, there's not much in the way of what you'd call hard evidence.