The equivalence of learning and indexing as problems has an important implication. There are significant data models and data domains for which we do not know how to build scalable indexing algorithms and data structures.
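One concrete illustration of the learning–indexing connection (a sketch of the "learned index" idea, not something from this text) is replacing a B-tree-style lookup with a model that predicts a key's position in a sorted array and then corrects the prediction with a bounded local search. All names and the synthetic key distribution below are illustrative assumptions.

```python
# Minimal learned-index sketch: a linear model predicts where a key sits in a
# sorted array; an exponential search around the guess corrects model error.
import bisect

def fit_linear(keys):
    """Hand-rolled least-squares line mapping key -> array position."""
    n = len(keys)
    mean_x = sum(keys) / n
    mean_y = (n - 1) / 2
    cov = sum((x - mean_x) * (y - mean_y) for y, x in enumerate(keys))
    var = sum((x - mean_x) ** 2 for x in keys)
    slope = cov / var if var else 0.0
    return slope, mean_y - slope * mean_x

def lookup(keys, model, key):
    """Predict a position, widen a window until the key must lie inside it,
    then finish with a binary search; returns -1 when the key is absent."""
    slope, intercept = model
    guess = min(max(int(slope * key + intercept), 0), len(keys) - 1)
    lo, hi = guess, guess + 1
    step = 1
    while lo > 0 and keys[lo] > key:       # model guessed too far right
        lo = max(lo - step, 0)
        step *= 2
    step = 1
    while hi < len(keys) and keys[hi - 1] < key:  # guessed too far left
        hi = min(hi + step, len(keys))
        step *= 2
    i = bisect.bisect_left(keys, key, lo, hi)
    return i if i < len(keys) and keys[i] == key else -1

keys = sorted(k * k for k in range(1000))  # a skewed, synthetic key set
model = fit_linear(keys)
```

The point of the sketch is the direction of the trade: the "index" is now a learned function plus an error bound, so data distributions a model can't fit cheaply are exactly the ones this style of index handles poorly.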
It has been noted for several years, in US national labs and elsewhere, that there is an almost perfect overlap between the data models LLMs are poor at learning and the data models we struggle to index at scale. If LLMs were actually good at these things, there would be a straightforward path to solving these longstanding non-AI computer science problems.
The incompleteness is that current LLM tech literally cannot represent elementary things that matter enough that we spend a lot of money trying to represent them on computers for non-AI purposes. The claim that super-intelligent AGI is right around the corner implies that we have solved problems we clearly have not.
Perhaps more interestingly, it also implies that AGI tech may look significantly different from the current LLM tech stack.