I have been using 4.6 on Cerebras (or Groq with other models) since it dropped, and it is a glimpse of the future. If AGI never happens but we manage to optimise things enough that I can run this on my handheld/tablet/laptop, I'll be beyond happy. And I guess that might happen, maybe with custom inference hardware like Cerebras. But seeing this generate at that speed is just jaw-dropping.
Cerebras and Groq both have their own novel chip designs. If they can scale and create a consumer-friendly product, that would be great, but I believe their speeds come from having all of their chips networked together, in addition to hardware designed specifically for LLM inference. AGI will likely happen at the data-center level before we can affordably get on-device performance equivalent to what we have access to today, but I would love to be wrong about that.