“Real problems” aren’t something that can be effectively discussed in the time span of an interview, so companies concoct unreal problems that are meant to be good indicators.
Really? How short are your interviews, and how big are these Real Problems such that you can't get a sense of how your candidate would start to tackle them?
Beyond that, these unreal questions/problems are decent proxies for general knowledge in humans, but not in AI. Humans don't have encyclopedic knowledge, so questions on a topic can do a decent job of indicating that a person has broader depth in that topic and could bring it to bear on the job. An AI can answer all the questions but can't bring that knowledge to bear on a job.
We saw this last year with all the "AI can now pass the bar exam" articles. Passing the exam doesn't lead to anything approaching the ability to practice law, because AI failure modes are not the same as human failure modes and can't be tested the same way.