We should have solved chess already.
We should be aiming to solve chess, but we are not even trying.
If the complexity of chess is finite this is possible.
Chess AIs have become so good that maybe there is no more progress to be made.
We must abandon the concept of "valuation".
Here is the criterion for solving chess.
A chess engine is a constant-time (or at least bounded-time) and bounded-memory function f which takes any position and returns 1, 0, or -1: win, draw, or loss from the perspective of the side to move (so the recursion below can simply negate the child's value).
A chess engine solves chess if we cannot find any violation of the Bellman equation:
f(gamestate) = max over legal moves of ( -f(gamestate after move) ) if there are legal moves, or f(gamestate) = game result if there are no legal moves.
This function f can be encoded either as a neural network or as ordinary code, but it must be computable in constant (or at least bounded) time.
The whole problem of solving chess then reduces to mining the game-state space for counterexamples.
As in math, you can conjecture that your chess engine has solved the game, and the conjecture stands until someone finds a violation (either the engine does not compute in bounded time, or the Bellman equation fails at some position).
To verify a chess engine you can sample a random game state, or a known-to-be-difficult one, and check whether the equation holds there; that costs at most (maximum number of legal moves + 1) function evaluations.
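As a concrete sketch of that check (assuming the python-chess library; `candidate_f` below is a hypothetical stand-in for whatever bounded-time engine you want to test):

```python
# Minimal sketch of the verification step described above, using python-chess.
# `candidate_f` is a hypothetical stand-in for your bounded-time engine: it must
# map any position to +1 / 0 / -1 from the perspective of the side to move.
import chess

def value_after(board: chess.Board, move: chess.Move, candidate_f) -> int:
    board.push(move)
    value = candidate_f(board)
    board.pop()
    return value

def bellman_violation(board: chess.Board, candidate_f) -> bool:
    """True if candidate_f is inconsistent with the Bellman equation at this position."""
    if board.is_game_over():
        # The side to move never wins a finished game: it is either
        # stalemated / drawn (0) or checkmated (-1).
        true_value = 0 if board.outcome().winner is None else -1
        return candidate_f(board) != true_value
    best = max(-value_after(board, move, candidate_f) for move in board.legal_moves)
    return candidate_f(board) != best
```

Checking one sampled position costs exactly one evaluation of the candidate per legal move, plus one for the position itself, as stated above.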
You can try applying this concept to simple games like tic-tac-toe, or to compressing an endgame tablebase with a neural network.
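For tic-tac-toe the whole game tree is small enough that such an f can literally be tabulated. A self-contained sketch, using the same side-to-move convention as above:

```python
# Tic-tac-toe is tiny enough that the oracle f can be built exhaustively.
# A position is a tuple of 9 cells ('X', 'O' or ' ') plus whose turn it is;
# f returns +1 / 0 / -1 from the perspective of the player to move.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

@lru_cache(maxsize=None)
def f(board: tuple, to_move: str) -> int:
    other = 'O' if to_move == 'X' else 'X'
    # The opponent may have completed a line on the previous move.
    if any(all(board[i] == other for i in line) for line in LINES):
        return -1                            # side to move has already lost
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0                             # board full: draw
    # The Bellman equation itself: best over legal moves of the negated child value.
    return max(-f(board[:i] + (to_move,) + board[i + 1:], other) for i in moves)

print(f((' ',) * 9, 'X'))                    # -> 0: tic-tac-toe is a draw
```

Once the cache is warm, every call is a lookup, i.e. a bounded-time f, and by construction it satisfies the Bellman equation at every reachable position. Chess will not fit in any such table, which is where the trained network below comes in.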
We can find such a candidate function by training a neural network to minimize the expected number of Bellman-equation violations over various datasets of game states. Once we "grok" it down to zero, we are done. To soften the problem you can train an auxiliary continuous function g which outputs the probability of 1, 0, or -1 (with a softmax) and add a discount factor as in reinforcement learning, but the final result is the discrete argmax of it (as in "deterministic policy gradient"), with no discount factor.
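A rough, self-contained illustration of that objective on tic-tac-toe (a toy stand-in: a real attempt on chess would need a serious position encoding, a sampling strategy, and far more capacity; this only shows the shape of the softened Bellman loss with a detached, TD-style target):

```python
# Sketch: train a small network so that its soft value respects the Bellman
# equation on randomly sampled tic-tac-toe positions. Everything here is a toy
# stand-in for the chess version described in the text.
import random
import torch
import torch.nn as nn

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def children(board, to_move):
    other = 'O' if to_move == 'X' else 'X'
    if any(all(board[i] == other for i in l) for l in LINES) or ' ' not in board:
        return []                                   # terminal position
    return [(board[:i] + (to_move,) + board[i + 1:], other)
            for i, c in enumerate(board) if c == ' ']

def terminal_result(board, to_move):
    other = 'O' if to_move == 'X' else 'X'
    return -1.0 if any(all(board[i] == other for i in l) for l in LINES) else 0.0

def encode(board, to_move):
    planes = [float(c == p) for p in ('X', 'O', ' ') for c in board]
    return torch.tensor(planes + [float(to_move == 'X')])

def random_position():
    board, to_move = (' ',) * 9, 'X'
    for _ in range(random.randrange(9)):
        kids = children(board, to_move)
        if not kids:
            break
        board, to_move = random.choice(kids)
    return board, to_move

net = nn.Sequential(nn.Linear(28, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def soft_value(x):
    # Auxiliary continuous g: softmax over {lose, draw, win} dotted with (-1, 0, +1).
    return torch.softmax(net(x), dim=-1) @ torch.tensor([-1.0, 0.0, 1.0])

for step in range(2000):
    loss = torch.tensor(0.0)
    for _ in range(32):
        board, to_move = random_position()
        kids = children(board, to_move)
        # Soft Bellman target: best negated child value, or the game result at a terminal.
        target = (max(-soft_value(encode(b, t)) for b, t in kids)
                  if kids else torch.tensor(terminal_result(board, to_move)))
        loss = loss + (soft_value(encode(board, to_move)) - target.detach()) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The final engine is then the discrete argmax over the three classes, and the exact Bellman check from above decides whether any violations remain.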
Once you have this finite-time oracle, playing chess is just wandering along the graph of drawable positions, trying to push your adversary into a position that his engine mis-evaluates, i.e. a violation of its Bellman equation. If he walks into it, you enter the winnable positions and stay there until the end. (Not every violation can be exploited, since the adversary can sometimes avoid entering "uncertain" positions.)
A simpler (weaker) chess strategy may avoid entering "uncertain" positions altogether, as long as they are never reached because earlier positions offer a preferred choice: either four-valued logic (win, draw, lose, uncertain) or an ordered list of moves. These make the problem easier to attack but complicate the corresponding Bellman equation.
So the game once again becomes exploring the space of "drawable" positions to find ones from which the adversary cannot avoid entering an uncertain (and, for him, losing) position. That is, playing the adversary and not the board, which is a harder problem unless the adversary plays perfectly. In other words, playing for the win.
> We should be aiming to solve chess, but we are not even trying.
We know exactly how to solve chess, we have known for decades. The function f is called minimax, and it can be optimized with things like alpha-beta pruning. Given that chess is a bounded game, this is a bounded-time and bounded-space algorithm. The definition of `f` that you gave can actually be quite directly encoded in Haskell and executed (though it will miss some obvious optimizations).
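For concreteness, here is that definition written out as negamax with alpha-beta pruning (in Python over python-chess rather than Haskell, purely as a sketch):

```python
# The `f` from the parent post, written directly as negamax with alpha-beta
# pruning over python-chess. It is a perfectly well-defined bounded computation;
# the objection is only about how long it takes from the initial position.
import chess

def f(board: chess.Board, alpha: int = -1, beta: int = 1) -> int:
    if board.is_game_over():
        # The side to move never wins a finished game: drawn (0) or checkmated (-1).
        return 0 if board.outcome().winner is None else -1
    value = -1
    for move in board.legal_moves:
        board.push(move)
        value = max(value, -f(board, -beta, -alpha))
        board.pop()
        alpha = max(alpha, value)
        if alpha >= beta:        # alpha-beta cutoff: remaining moves cannot matter
            break
    return value
```

Nothing about this is conceptually mysterious; calling it on `chess.Board()` is exactly the computation whose running time is the problem.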
The problem is that this algorithm seems to be close to optimal, and it would still take a few thousand years of computation time to actually run it to solve chess (or was it decades, or millions of years? not really that relevant).
Now, of course, no one has actually proved that this is the optimal algorithm, so for all we know, there exists a much simpler `f` that could take milliseconds on a pocket calculator. But this seems unlikely given the nature of the problem, and either way, it's just not interesting enough for most people to put in the kind of deep mathematical research it would take.
Solving chess is not really a very interesting problem as pure mathematics. The whole interest was in beating human players with human-like strategies, which has been thoroughly achieved. The most interesting thing that remains is challenging humans who like chess at their own level of gameplay - since ultimately the only true purpose of chess is to entertain humans, and a machine that plays perfectly is actually completely unfun to play against.
> We should have solved chess already.
> We should be aiming to solve chess, but we are not even trying.
We are trying and we should not expect to solve it because the search space is massive. There are more legal games of chess than atoms in the universe.
> If the complexity of chess is finite this is possible.
This is like saying the distance to Mars is finite so it should be possible to just go and live there. It's not theoretically impossible, but at this time it is practically impossible. There are real engineering challenges between here and there that have not yet been solved, and from the looks of it, it will be at least several more decades before we get there.
In your example, you gloss over the construction of a "finite time oracle" by just saying "just train it until it is perfect". If humanity knew how to do that we could solve a whole other (more interesting) set of problems, but we don't.
I've got to ask, do you play much chess? Because this post reads like you don't understand much about chess.
The issue with "solving" chess isn't that there isn't an objectively best move in every position. The issue is that calculating that move is functionally impossible for most positions. That's because chess gets exponentially more complicated the more pieces there are on the board. For example, there are around 26 million positions with 5 or fewer pieces, and over 3.7 billion positions with 6 or fewer pieces.
And most of those positions are distinct. This isn't like a Rubik's cube, where there are a lot of functionally identical positions. Any experienced chess player can tell you that a single piece placement can be the difference between a winning position and a losing position.
And this complexity is what I love about chess! I love the fact that I can enter positions that no one has ever encountered before just by playing some chess online. I love the fact that deep analysis is possible, but that the sheer size of the possibility space means we can never truly solve chess. Chess strikes a perfect balance of complexity. Any simpler, and evaluating the best move would be too easy. Any more complicated, and evaluation becomes so difficult that it's hardly worth trying.
Which isn't to say that we can't build computers that are very good at chess. A person hasn't beaten a top computer in decades. Magnus Carlsen is probably the greatest chess player to have ever lived, and you can run software on your phone that could easily beat him. But there's a wide gulf between "can beat every human alive" and "plays objectively perfect chess."
"We should have solved Busy Beaver, if the complexity of BB(n) is finite this is possible."
I mean yeah, chess isn't THAT bad but still not directly tractable and besides, brute forcing it is boring.
This comment and your other comments are simply wrong and full of nonsense. Endgame tablebase generators are pure solvers ... given enough time, they can solve chess from the initial position. But the amount of time is longer than the time until the heat death of the universe, and to record the best-play game tree--without which a solution isn't useful--would take more material than there is in the universe.