And why would it materialize? Anyone who has used even modern models like Opus 4.6 in very long, in-depth chats about concrete topics KNOWS that this LLM form of artificial intelligence is anything but intelligent.
You see the cracks appear quite fast, and you can almost feel how trained patterns are regurgitated with some variance, without the model actually contextualizing and connecting things. More guardrails, like web sources or attachments, merely narrow the space of possible patterns; you never get the feeling that the bot understands. Your own prompting can also significantly sway its opinions and conclusions, regardless of the factual reality.
The great irony is that this episode is exposing who is truly intelligent and who is not.
Folks, feel free to screenshot this ;)