A bit related: open-weights models are basically time capsules. These models have a knowledge cutoff and essentially live in that moment forever.
This is the most fundamental argument that they are not, directly, an intelligence: they never store new information on a meaningful timescale. However, if you viewed them on some really large macro timescale, where LLMs are injecting information into the universe and then re-ingesting it, then in some very philosophical way they are a /very/ slowly oscillating intelligence right now. And as we narrow that gap (maybe with a totally new non-LLM paradigm), perhaps that is ultimately what gen AI becomes. Or some new insight lets models update themselves in some fundamental way without the insanely expensive training costs they have now.
I enjoyed chatting with Opus 3 recently about recent world events, as well as more recent agentic development patterns, etc.
Not an expert, but surely it's only a matter of time until there's a way to update a model with the latest information without having to retrain on the entire corpus?
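One partial answer already exists in practice: retrieval. Instead of changing the weights, fresh documents are fetched at query time and placed in the prompt, so the frozen model answers from context rather than from its training data. A toy sketch of the retrieval step, using bag-of-words cosine similarity as a stand-in for a real embedding model (the similarity measure here is a simplification for illustration):

```python
import math
from collections import Counter

def bow(text):
    # Toy "embedding": bag-of-words counts. A real system would
    # use a learned embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents (added after the model's cutoff) by similarity to the query.
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

# Documents written after the model's training cutoff:
docs = [
    "The 2026 election results were announced yesterday.",
    "A recipe for sourdough bread using rye flour.",
]

best = retrieve("what happened in the 2026 election", docs)[0]
# The frozen model then answers from the prompt, not its weights:
prompt = f"Context: {best}\nQuestion: what happened in the 2026 election?"
```

This sidesteps retraining entirely, though it only updates what the model can *see*, not what it *knows*; the weights stay frozen in their time capsule.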