It didn't have to, not explicitly. The tone and the context already hint at that - if you saw someone creating a fake cover of an existing periodical, dated 10 years into the future, you'd likely assume it was part of a joke or a commentary on said periodical, not a serious attempt at predicting the future. And so would an LLM.
People keep forgetting (or worse, still refuse to believe) that LLMs can "read between the lines" and infer intent with good accuracy - because that's exactly what they're trained to do[0].
Also, there's prior art for time-displaced HN, and it's universally been satire.
--
[0] - The goal function for LLM output is basically "feels right, makes sense in context to humans" - in the fully general meaning of that statement.