I think models should be “forked” and should learn from subsets of the input data and from their own output. Furthermore, individuals (or at least small groups) should have their own LLMs.
Sameness is bad for LLMs the way it’s bad for a culture or a species: everyone becomes susceptible to the same tricks, memetic viruses, and physical viruses; there’s slow degradation (model collapse) and no improvement. I think we should experiment with different models, take output from the best to train new ones, and repeat, like natural selection.
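To make the loop concrete, here’s a toy sketch of what I mean. Everything in it is a hypothetical stand-in: `Model` is a placeholder for a real checkpoint, `evaluate` for a benchmark suite, and `breed` for fine-tuning a fresh model on the best models’ outputs.

```python
import random

class Model:
    """Stand-in for a real LLM checkpoint; 'weights' is a single toy parameter."""
    def __init__(self, weights: float):
        self.weights = weights

def evaluate(model: Model) -> float:
    # Hypothetical fitness score; in practice this would be
    # performance on a held-out set of tasks.
    return -abs(model.weights - 0.7)

def breed(parents: list[Model]) -> Model:
    # Stand-in for training a new model on the parents' best outputs,
    # with some noise so the population stays diverse.
    base = sum(p.weights for p in parents) / len(parents)
    return Model(base + random.gauss(0, 0.05))

# Start with a diverse population, then select and retrain repeatedly.
population = [Model(random.random()) for _ in range(8)]
for generation in range(20):
    population.sort(key=evaluate, reverse=True)
    survivors = population[:3]                        # keep the best models
    children = [breed(survivors) for _ in range(5)]   # train new ones from them
    population = survivors + children

print(f"best fitness after selection: {evaluate(population[0]):.3f}")
```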
And sameness is mediocre. LLMs are boring, and at most tasks they’re only almost as good as humans. Giving them the ability to keep learning might make them “creative” and let them surpass humans at more tasks.