> We should be aiming to solve chess, but we are not even trying.
We know exactly how to solve chess; we have known for decades. The function f is called minimax, and it can be optimized with techniques like alpha-beta pruning. Given that chess is a bounded game, this is a bounded-time, bounded-space algorithm. The definition of `f` that you gave can actually be encoded quite directly in Haskell and executed (though it will miss some obvious optimizations).
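For concreteness, here is a minimal sketch of that recursion over an abstract game tree - the `Tree` type and the leaf values are illustrative stand-ins, not a chess implementation - with an `alphabeta` variant showing the pruning mentioned above:

```haskell
-- Minimax over an abstract game tree. Leaves hold positions already
-- evaluated for the maximizing player; internal nodes list the moves.
data Tree = Leaf Int | Node [Tree]

-- Plain minimax: the maximizer picks the largest child value,
-- the minimizer the smallest, alternating by ply.
minimax :: Bool -> Tree -> Int
minimax _          (Leaf v)  = v
minimax maximizing (Node ts)
  | maximizing = maximum (map (minimax False) ts)
  | otherwise  = minimum (map (minimax True) ts)

-- The same recursion with alpha-beta pruning: stop exploring a node's
-- children once its value can no longer affect the result higher up.
alphabeta :: Int -> Int -> Bool -> Tree -> Int
alphabeta _ _ _ (Leaf v) = v
alphabeta alpha beta True (Node ts) = goMax alpha ts
  where
    goMax a []       = a
    goMax a (t:rest)
      | a' >= beta   = a'          -- beta cutoff: the minimizer above
      | otherwise    = goMax a' rest  -- would never allow this line
      where a' = max a (alphabeta a beta False t)
alphabeta alpha beta False (Node ts) = goMin beta ts
  where
    goMin b []       = b
    goMin b (t:rest)
      | b' <= alpha  = b'          -- alpha cutoff, symmetrically
      | otherwise    = goMin b' rest
      where b' = min b (alphabeta alpha b True t)

main :: IO ()
main = do
  let t = Node [Node [Leaf 3, Leaf 5], Node [Leaf 2, Leaf 9]]
  print (minimax True t)                        -- prints 3
  print (alphabeta minBound maxBound True t)    -- prints 3
```

Both functions return the same value; alpha-beta just visits fewer nodes (in the tiny example above, `Leaf 9` is never evaluated). The point of the parent comment stands: the algorithm is trivially finite, and the entire difficulty is the size of the tree it walks.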
The problem is that, while this algorithm seems close to optimal, it would still take some thousands of years of computation time to actually run it and solve chess (or was it decades, or millions of years? not really that relevant).
Now, of course, no one has actually proved that this is the optimal algorithm, so for all we know there exists a much simpler `f` that could take milliseconds on a pocket calculator. But that seems unlikely given the nature of the problem, and either way, for most people the question just isn't interesting enough to justify the kind of deep mathematical research it would take.
Solving chess is not really a very interesting problem as pure mathematics. The whole interest was in beating human players with human-like strategies, which has been thoroughly achieved. The most interesting thing that remains is challenging humans who like chess at their own level of play - since ultimately the only true purpose of chess is to entertain humans, and a machine that plays perfectly is completely unfun to play against.
To be fair, if you take your argument from the last paragraph, i.e. that the function of chess as a game is to entertain, your earlier argument re: min-max doesn't really stand, does it? I think you're right that chess is probably quite interesting in terms of abstract maths; surely there are ways to represent the pawns (pawn structures?) as well as the pieces (knights, bishops, etc.) in terms of some supersymmetry. However, it doesn't seem like much progress has been made in this area academically since the 20th century. It may be helpful to tap into AlphaFold and related results for interpretability! Stockfish has incorporated a neural-network evaluation (NNUE), but it's comparatively small-scale and behind the SOTA of bleeding-edge Transformer architectures (in terms of interpretability, no less!). Surely, if we can't get at the supersymmetries in some complex form, we could get ahead with modern interpretability and RL techniques. Given an appropriate knowledge representation - combining self-play with known playing sequences and behaviours by forcing the model into known lines, and perhaps partitioning by player style so the model has an incentive to learn style features - it should be possible for it to comfortably learn what we refer to as the essence of the game, i.e. archetypal human playing styles. Using insights learned from interpretability, it should then be possible to further influence the model during inference.
If a model were to get to that point, we could say that chess had been solved...