This comment and your other comments are simply wrong and full of nonsense. Endgame table generators are pure solvers ... given enough time, they can solve chess from the initial position. But the amount of time is longer than the time until the heat death of the universe, and to record the best-play game tree--without which a solution isn't useful--would take more material than there is in the universe.
>This comment and your other comments are simply wrong and full of nonsense
That's because you're not understanding me.
My conviction is that the complexity of chess is not as high as we think, and that there exists a neural network of fewer than 10B float32 weights that can encode perfect play.
Neural-network evaluations are already used as heuristics to score positions, and even without tree search they play very well; but those networks are usually very small and are complemented by a few hand-crafted high-level input features. A bigger, well-tuned network can probably reach perfect play without any tree search at all.
A thought exercise: try to compress an endgame tablebase with a neural network and see how big the network has to be to reach perfect play. The thing is: you don't need to train it on every position in the tablebase before it converges to perfect play.
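To make the exercise concrete, here is a minimal sketch, assuming PyTorch, python-chess, and a local Syzygy tablebase directory ("./syzygy" is a placeholder path): it trains a tiny MLP to reproduce the win/draw/loss labels of random K+Q vs K positions, and you can watch how few samples it needs before its error rate hits zero.

```python
import random
import chess
import chess.syzygy
import torch
import torch.nn as nn

def random_kqk_position():
    """A random legal K+Q vs K position with a random side to move."""
    while True:
        board = chess.Board(None)  # empty board
        wk, bk, wq = random.sample(chess.SQUARES, 3)
        board.set_piece_at(wk, chess.Piece(chess.KING, chess.WHITE))
        board.set_piece_at(bk, chess.Piece(chess.KING, chess.BLACK))
        board.set_piece_at(wq, chess.Piece(chess.QUEEN, chess.WHITE))
        board.turn = random.choice([chess.WHITE, chess.BLACK])
        if board.is_valid():
            return board

def encode(board):
    """12 piece-on-square planes plus side to move: 12*64 + 1 floats."""
    x = torch.zeros(769)
    for sq, piece in board.piece_map().items():
        channel = (piece.piece_type - 1) + (0 if piece.color else 6)
        x[channel * 64 + sq] = 1.0
    x[768] = 1.0 if board.turn else 0.0
    return x

model = nn.Sequential(nn.Linear(769, 256), nn.ReLU(), nn.Linear(256, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

with chess.syzygy.open_tablebase("./syzygy") as tb:  # placeholder path
    for step in range(10_000):
        boards = [random_kqk_position() for _ in range(64)]
        xs = torch.stack([encode(b) for b in boards])
        # Fold Syzygy WDL (-2..2, from side to move) into loss/draw/win.
        wdls = [tb.probe_wdl(b) for b in boards]
        ys = torch.tensor([(w > 0) - (w < 0) + 1 for w in wdls])
        loss = loss_fn(model(xs), ys)
        opt.zero_grad(); loss.backward(); opt.step()
```

The interesting number is the ratio between the weight count at which the net stops making mistakes and the size of the tablebase it replaces.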
You can tell how close you are to optimal by counting Bellman-equation violations (or by observing none). For a perfect win/draw/loss evaluation V, every non-terminal position must satisfy V(s) = max over legal moves of -V(s'); any position where that fails is a provable error, and you never need the true answer to detect it.
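Here is what counting those violations looks like (a sketch; `value` stands in for whatever network you trained above, returning -1/0/+1 from the side to move's perspective):

```python
import chess

def bellman_violations(boards, value):
    """Count positions where value(s) != max over moves of -value(s')."""
    violations = 0
    for board in boards:
        if board.is_game_over():
            continue  # terminal values come from the rules, not the net
        backed_up = -2
        for move in board.legal_moves:
            board.push(move)
            if board.is_checkmate():
                child = -1          # side to move in the child is mated
            elif board.is_game_over():
                child = 0           # stalemate / draw by rule
            else:
                child = value(board)
            board.pop()
            backed_up = max(backed_up, -child)
        if value(board) != backed_up:
            violations += 1
    return violations
```

If the count is zero across the whole state space, and terminal positions are scored by the rules, the net's win/draw/loss calls match the game-theoretic values (cycles from repetition draws need a little extra care, but the principle stands).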
You can even train it by having it reference the previously trained oracle, the way tablebases themselves are built. You solve chess with a neural network when there are only 2 pieces. Then you solve 3-piece chess using the 2-piece oracle, then 4-piece chess using the 3-piece oracle, and so on ... until you reach 32-piece chess (see the sketch below).
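A sketch of that induction step, assuming hypothetical helpers: `oracle_prev(board)` is the already-solved (k-1)-piece value function, `sample_positions(k)` yields legal k-piece boards, and `train_step` does an ordinary supervised update. None of these are real library calls.

```python
import chess

def backed_up_target(board, oracle_prev, net):
    """One-ply negamax backup. A capture drops the piece count, so the
    smaller, already-solved oracle scores that child exactly; quiet
    children are scored by the current net (bootstrapping)."""
    n_pieces = len(board.piece_map())
    best = -2
    for move in board.legal_moves:
        board.push(move)
        if board.is_checkmate():
            child = -1                      # side to move in child is mated
        elif board.is_game_over():
            child = 0                       # stalemate / draw by rule
        elif len(board.piece_map()) < n_pieces:
            child = oracle_prev(board)      # exact: solved smaller game
        else:
            child = net(board)              # same piece count: bootstrap
        board.pop()
        best = max(best, -child)
    return best

def solve_k_piece_chess(k, oracle_prev, net, steps=100_000):
    """Regress the net onto its own one-ply backup until the Bellman
    violation count from the previous sketch reaches zero."""
    for _ in range(steps):
        for board in sample_positions(k):   # hypothetical sampler
            target = backed_up_target(board, oracle_prev, net)
            train_step(net, board, target)  # hypothetical update
    return net
```

The base case, K vs K, is a constant draw, and each step only has to learn the dynamics at its own piece count, because everything below it is already exact.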
Adding pieces only increases complexity up to a point.