> This comment and your other comments are simply wrong and full of nonsense
That's because you're misunderstanding me.
My conviction is that the complexity of chess is not as high as we think, and that there exists a neural network of fewer than 10B float32 weights which can encode perfect play.
Neural network evaluations are now used as heuristics to evaluate positions, and even without tree search they play very well, but these networks are usually very small and complemented by a few high-level features as input. A bigger, well fine-tuned network can probably reach perfect play without the need for tree search.
A thought exercise: try to compress an endgame tablebase with a neural network and see how big it needs to be in order to reach perfect play. The thing is: you don't need to train it on all positions from the tablebase before it converges to perfect play.
You can tell how close you are to optimal by counting the number of Bellman equation violations (or by observing none at all).
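The violation-counting idea can be sketched on a toy game (single-pile Nim, not chess; every name here is illustrative, not a real chess API): a candidate value function V is checked against the Bellman condition V(s) = max over legal moves of -V(s'), and the exact minimax value is the one with zero violations.

```python
from functools import lru_cache

# Toy sketch (single-pile Nim, not chess): count how often a candidate
# value function V violates the Bellman condition
#     V(s) == max over legal moves of -V(s')
# Zero violations on all reachable states means V is self-consistent
# with optimal play. Values are from the side to move: +1 win, -1 loss.

def moves(n):
    # Take 1 or 2 stones; taking the last stone wins.
    return [n - k for k in (1, 2) if n - k >= 0]

@lru_cache(maxsize=None)
def exact_value(n):
    if n == 0:
        return -1  # no stones left: the previous player took the last one
    return max(-exact_value(m) for m in moves(n))

def bellman_violations(V, max_n):
    return sum(1 for n in range(1, max_n + 1)
               if V(n) != max(-V(m) for m in moves(n)))

print(bellman_violations(exact_value, 30))   # 0: perfect play
print(bellman_violations(lambda n: 1, 30))   # 30: "always winning" fails everywhere
```

The same check would apply to a trained network: evaluate it on sampled positions and count how often its value disagrees with the best value reachable in one move.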
You can even train it by having it reference the previously trained oracle for smaller endgame tables. You solve chess with a neural network when there are only 2 pieces. Then you solve chess for 3 pieces, possibly using the 2-piece oracle. Then you solve chess for 4 pieces using the 3-piece oracle, and so on, until you reach chess for 32 pieces.
Adding pieces only increases complexity up to a point.
I do understand you, but you and your "conviction" are wrong. Apparently you aren't even familiar with AlphaZero.
> Adding pieces only increases complexity up to a point.
As you said, until you reach 32 pieces. What you vastly underestimate is how much complexity is added at each level. You're like the king in the fable who agreed to give the vizier a small amount of grain: 1 grain for the first square, 2 grains for the second square, 4 grains for the third square, etc. The king thought he was getting a bargain.
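For concreteness, the chessboard-and-grain doubling works out to the following (simple arithmetic, just to make the scale explicit):

```python
# Grain-doubling arithmetic from the fable: 1 grain on square 1,
# doubled on each of the 64 squares. The total is 2**64 - 1.
total = sum(2**k for k in range(64))
print(total)                  # 18446744073709551615
print(f"{total:.2e} grains")  # ~1.84e+19 grains
```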
> The thing is: you don't need to train it on all positions from the tablebase before it converges to perfect play.
But you do, because there is no algorithmic simplification, at all. Strong chess players understand that while there are common patterns throughout chess, their application is highly specific to the position. That's why we have endgame tablebases, which are used to solve positions that pattern matching doesn't solve. You can get excellent play out of an NN, but that's not the same as solving the game. And the absence of Bellman violations is necessary but not sufficient ... you can't use it to prove that you've solved chess. The fact is that it is impossible within pragmatic limits to prove that chess has been solved. But so what? Programs like AlphaZero and Stockfish already play well enough for any purpose.
Anyway, you're free to go implement this ... good luck. I won't respond further.
> My conviction is that the complexity of chess is not as high as we think, and that there exists a neural network of fewer than 10B float32 weights which can encode perfect play
Given the number of possible games (estimates above 10^80), you would need EXTREME sparsity to encode it in fewer than 10B / 10^10 params. Sounds information-theoretically impossible to me ¯\_(ツ)_/¯
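A back-of-envelope comparison makes the gap vivid (illustrative only: the position count is Tromp's upper-bound estimate, the labeling scheme is my assumption, and a perfect policy could in principle be far more compressible than a lookup table):

```python
import math

# Raw capacity of a 10B-float32 network vs. naively tabulating one
# win/draw/loss label per legal position. Position count is an
# assumed upper bound (~4.8e44, Tromp's estimate).
params = 10**10
capacity_bits = params * 32              # 3.2e11 bits
positions = 4.8e44                       # assumed upper bound on legal positions
bits_per_label = math.log2(3)            # win / draw / loss
table_bits = positions * bits_per_label  # ~7.6e44 bits
print(f"network capacity: {capacity_bits:.1e} bits")
print(f"naive table:      {table_bits:.1e} bits")
print(f"ratio:            {table_bits / capacity_bits:.1e}x")
```

The point is only that a 10^10-parameter network cannot memorize the answer; it would have to exploit enormous structure in the game, which is exactly what is in dispute.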
Leela Chess Zero has hundreds of millions to a few billion parameters, AFAIK.
The argument that the game is finite and thus solvable is misguided, IMO.
The AES key space is also finite, and you could enumerate all possible keys, but not before the end of time.
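To put a number on that (assuming, purely for illustration, a brute-force rate of 10^12 keys per second):

```python
# Rough brute-force arithmetic for AES-128. The search rate is an
# assumed figure for illustration, not a measured benchmark.
keys = 2**128                     # ~3.4e38 possible keys
rate = 10**12                     # assume 1 trillion keys/second
seconds = keys / rate
years = seconds / (3600 * 24 * 365)
print(f"{years:.1e} years")       # ~1.1e+19 years
```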
> the complexity of chess is not as high as we think, and that there exists a neural network of fewer than 10B float32 weights which can encode perfect play.
That is certainly not the mainstream view, so you will have to support your conjecture with some evidence if you want to convince people that you are right (or demonstrate it empirically, end to end).