LLMs have been bad at creating abstraction boundaries since inception, and people have been calling it out for just as long. (Heck, even I have a Twitter post somewhere >12 months old calling that out, and I'm not exactly a leading light of the effort.)
It is in no way size-related. The technology cannot create new concepts/abstractions, and so fails at abstraction. Reliably.
I believe his argument is that now that the limitation has been clearly defined, it's a ceiling that will likely be cracked in the relatively near future.
There’s only one way to implement a mission, an algorithm, a task. But there are infinitely many paths, and inconsistent, fuzzy, always subjective ways to live. That’s our lives, and that’s the code LLMs are trained on. I do not think it will ever change much, and I hope it doesn't.
> The technology cannot create new concepts/abstractions, and so fails at abstraction. Reliably.
That statement is way too strong, as it implies either that humans cannot create new concepts/abstractions, or that magic exists.