This is just trivia. I would not use it to test computers -- or humans.
It's a good way to assess the model with respect to hallucinations, though.
I don't think a model needs to know the answer, but if you want to use it reliably, it must be able to recognize that it doesn't know.
Everything is just trivia until you have a use for the answer.
OP provided a web link with the answer; aren't these models supposed to be trained on all of that data?