Hacker News

saaaaaam · yesterday at 11:16 PM · 13 replies

“Time-locked models don't roleplay; they embody their training data. Ranke-4B-1913 doesn't know about WWI because WWI hasn't happened in its textual universe. It can be surprised by your questions in ways modern LLMs cannot.”

“Modern LLMs suffer from hindsight contamination. GPT-5 knows how the story ends—WWI, the League's failure, the Spanish flu.”

This is really fascinating. As someone who reads a lot of history and historical fiction, I find this intriguing. Imagine having a conversation with someone genuinely from the period, where they don’t know the “end of the story”.


Replies

jscyc · today at 12:07 AM

When you put it that way, it reminds me of the Severn/Keats character in the Hyperion Cantos. Far-future AIs reconstruct historical figures from their writings in an attempt to gain philosophical insights.

pwillia7 · today at 11:38 AM

This is why the impersonation stuff is so interesting with LLMs -- if you ask ChatGPT a question without a 'right' answer, and then tell it to embody someone you really want to ask that question to, you'll get a better answer with the impersonation. Now, is this the same phenomenon that causes people to lose their minds with LLMs? Possibly. Is it really cool asking follow-up philosophy questions to the LLM Dalai Lama after reading his book? Yes.

culi · today at 2:18 AM

I used to follow this blog (I believe it was somehow associated with Slate Star Codex?). Anyway, I remember the author used to run experiments on themselves where they spent a week or two only reading newspapers and media from a specific point in time, then wrote a blog post about their experiences and takeaways.

On that same note, there was a great YouTube series called The Great War. It ran from 2014 to 2018 (100 years after WW1) and followed WW1 developments week by week.

takeda · today at 6:05 PM

> This is really fascinating. As someone who reads a lot of history and historical fiction, I find this intriguing. Imagine having a conversation with someone genuinely from the period, where they don’t know the “end of the story”.

Having the facts from the era is one thing; drawing conclusions about things it doesn't know would require intelligence.

ghurtado · today at 5:33 AM

This might just be the closest we get to a time machine for some time. Or maybe ever.

Every "King Arthur travels to the year 2000" kinda script is now something that writes itself.

> Imagine having a conversation with someone genuinely from the period,

Imagine not just someone, but Aristotle or Leonardo or Kant!

observationist · yesterday at 11:59 PM

This is definitely fascinating. By doing AI brain surgery and selectively tuning a model's knowledge and priors, you'd be able to create awesome and terrifying simulations.

psychoslave · today at 11:27 AM

> Imagine having a conversation with someone genuinely from the period, where they don’t know the “end of the story”.

Isn't this a basic feature of the human condition? Not only are we all unaware of the coming historical outcome (though we can guess some of the big points more or less well), but to a varying extent we are also largely unaware of past and present history.

LLMs are not aware, but they can be trained on a larger body of historical accounts than any human could absorb and regurgitate syntactically correct summaries of any point within it. A very different kind of utterer.

ViktorRay · today at 2:56 PM

Reminds me of this scene from a Doctor Who episode:

https://youtu.be/eg4mcdhIsvU

I’m not a Doctor Who fan, haven’t seen the rest of the episode, and don’t even know what it was about, but I thought this scene was excellent.

anshumankmr · today at 1:22 PM

> where they don’t know the “end of the story”.

Applicable to us as well, because we do not know how the current story ends either: the story of the post-pandemic world as we know it now.

xg15 · today at 12:03 AM

"...what do you mean, 'World War One?'"

Sieyk · today at 5:14 AM

I was going to say the same thing. It's really hard to explain the concept of "convincing but undoubtedly pretending", yet they captured that concept so beautifully here.

Davidbrcz · today at 7:29 AM

That's some Westworld-level discussion.

rcpt · today at 4:30 AM

Watching a modern LLM chat with this would be fun.