I don’t see how that analogy makes sense. We’re not talking about containers of a known and fixed size here, nor about a single technique or method. LLMs built on Transformer architectures might have reached a plateau, for instance, but there are tons of techniques _around_ those models that keep making them more capable (o1, etc.), as well as other architectures entirely.