I think all of those are terrible indicators; 1 and 2, for example, only measure how well LLMs can handle long context sizes.
If a movie or novel is famous, the training data is already full of commentary and interpretations of it.
If it's something not in the training data, well, I don't know many movies or books that use only motifs no other piece of content before them used, so interpreting based on what's similar in the training data still produces good results.
EDIT: With 1 I meant using a transcript of the Audio Description of the movie. If he really meant watching a movie, I'd say that's even sillier, because of course we could get another agent to first generate the Audio Description, which is definitely possible currently.
Just yesterday I saw an article about a police station's AI body cam summarizer mistakenly claiming that a police officer turned into a frog during a call. What actually happened was that the cartoon "The Princess and the Frog" was playing in the background.
Sure, another model might have gotten it right, but I think the prediction was made less in the sense of "this will happen at least once" and more in the sense of "this will not be an uncommon capability".
When the quality is this low (or this variable depending on the model), I'm not sure I'd call it a larger issue than mere context size.