“Time-locked models don't roleplay; they embody their training data. Ranke-4B-1913 doesn't know about WWI because WWI hasn't happened in its textual universe. It can be surprised by your questions in ways modern LLMs cannot.”
“Modern LLMs suffer from hindsight contamination. GPT-5 knows how the story ends—WWI, the League's failure, the Spanish flu.”
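The “time lock” is presumably nothing more exotic than a hard date cutoff on the pretraining corpus. A minimal sketch of what that filter could look like (the record fields and the drop-if-undated policy are my guesses, not anything from the article):

```python
from datetime import date

CUTOFF = date(1913, 1, 1)  # nothing from 1913 onward enters the corpus

def time_locked(docs, cutoff=CUTOFF):
    """Yield the text of documents published strictly before the cutoff.

    Assumes each doc is a dict with 'text' and 'published' (a date or None).
    Undated documents are dropped rather than risked.
    """
    for doc in docs:
        published = doc.get("published")
        if published is not None and published < cutoff:
            yield doc["text"]

corpus = [
    {"text": "Treaty of Portsmouth ends the Russo-Japanese War...",
     "published": date(1905, 9, 5)},
    {"text": "Archduke assassinated in Sarajevo...",
     "published": date(1914, 6, 28)},
    {"text": "Undated pamphlet...", "published": None},
]
print(list(time_locked(corpus)))  # only the 1905 document survives
```

The hard part in practice isn't the filter; it's reliably dating millions of documents, since a single leaked post-cutoff text contaminates the model's “present”.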
As someone who reads a lot of history and historical fiction, I find this really fascinating. Imagine having a conversation with someone genuinely from the period, where they don’t know the “end of the story”.
This is why the impersonation stuff is so interesting with LLMs: if you ask ChatGPT a question without a 'right' answer, and then tell it to embody someone you really want to ask that question to, you'll get a better answer with the impersonation. Now, is this the same phenomenon that causes people to lose their minds with LLMs? Possibly. Is it really cool asking follow-up philosophy questions to the LLM Dalai Lama after reading his book? Yes.
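Mechanically, the impersonation is just a system prompt. A rough sketch with the OpenAI Python client (the model name and prompt wording are placeholders of mine, not anyone's recipe):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The persona lives entirely in the system message; nothing else changes.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model works
    messages=[
        {"role": "system",
         "content": ("You are the Dalai Lama. Answer as he would, drawing on "
                     "his published writings, and do not break character.")},
        {"role": "user",
         "content": "Is compassion something you practice, or something you are?"},
    ],
)
print(response.choices[0].message.content)
```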
I used to follow a blog (I believe it was somehow associated with Slate Star Codex?) where the author ran experiments on themselves: they'd spend a week or two reading only newspapers/media from a specific point in time and then write a post about their experience and takeaways.
On that same note, there was this great YouTube series called The Great War. It ran from 2014 to 2018 (100 years after WWI) and followed the war's developments week by week.
> As someone who reads a lot of history and historical fiction, I find this really fascinating. Imagine having a conversation with someone genuinely from the period, where they don’t know the “end of the story”.
Having the facts from the era is one thing; drawing conclusions about things it doesn't know would require actual intelligence.
This might just be the closest we get to a time machine for some time. Or maybe ever.
Every "King Arthur travels to the year 2000" kinda script is now something that writes itself.
> Imagine having a conversation with someone genuinely from the period,
Imagine not just someone, but Aristotle or Leonardo or Kant!
This is definitely fascinating: being able to do AI brain surgery and selectively tune a model's knowledge and priors would let you create awesome and terrifying simulations.
> Imagine having a conversation with someone genuinely from the period, where they don’t know the “end of the story”.
Isn't this a basic feature of the human condition? Not only are we all unaware of the coming historical outcome (though we can guess some of the big points with more or less success), but to a varying extent we are also largely unaware of past and present history.
LLMs are not aware either, but they can be trained on a larger body of historical accounts than any human could absorb, and can regurgitate a syntactically correct summary of any point within it. A very different kind of utterer.
Reminds me of this scene from a Doctor Who episode.
I'm not a Doctor Who fan, haven't seen the rest of the episode, and don't even know what it was about, but I thought this scene was excellent.
> where they don’t know the “end of the story”.
That applies to us as well: we don't know how the current story ends either, the story of the post-pandemic world as we know it now.
I was going to say the same thing. It's really hard to explain the concept of "convincing but undoubtedly pretending", yet they captured that concept so beautifully here.
That's some Westworld-level discussion.
Watching a modern LLM chat with this would be fun.
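You could wire that up in an afternoon: two chat loops where each model's reply becomes the other's next message. A sketch, assuming the time-locked model were served behind an OpenAI-compatible endpoint (the base URL and both model names here are made up):

```python
from openai import OpenAI

modern = OpenAI()  # a present-day model; reads OPENAI_API_KEY
# Hypothetical: the time-locked model behind an OpenAI-compatible server.
locked = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

def reply(client, model, history, incoming):
    """Record the other speaker's line, return this speaker's answer."""
    history.append({"role": "user", "content": incoming})
    resp = client.chat.completions.create(model=model, messages=history)
    text = resp.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

# Each side keeps its own view of the conversation.
modern_history = [{"role": "system", "content":
    "You are interviewing a scholar who believes it is 1913. "
    "Ask probing questions and never reveal later events."}]
locked_history = [{"role": "system", "content":
    "Answer as a historian writing in 1913."}]

line = "What do you expect the Balkan situation to look like in five years?"
for _ in range(4):  # a few turns each way
    line = reply(locked, "ranke-4b-1913", locked_history, line)
    print("1913:", line, "\n")
    line = reply(modern, "gpt-4o", modern_history, line)
    print("now: ", line, "\n")
```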
When you put it that way, it reminds me of the Severn/Keats character in the Hyperion Cantos: far-future AIs reconstruct historical figures from their writings in an attempt to gain philosophical insights.