This reminds me of this nice video: https://www.youtube.com/watch?v=YGLNyHd2w10
Basically, the state space of the game can give some intuition about why certain games are hard: the video shows it as clusters of states that are connected to the winning state by only a very small number of edges, so a player can easily "get lost" in the maze.
That video became an instant classic for me when it came out; it's very well made and explained, and it has very much influenced my thinking lately. Related to the article, I also spent a few months working on chess line/complexity visualization and explanation as a side project. The main outcome of that was https://www.schachzeit.com/en/openings/sicilian-defense. I haven't touched it beyond keeping it barely running for most of 2025, so a strong breeze would surely knock it over, but the experience of making it taught me a lot about chess and computerized chess.
The table at the bottom of those pages works by running Stockfish not only on the current position, but also on the position after every legal move the player to move could make. This increases the amount of computation needed by one or two orders of magnitude, since Stockfish often does not search every legal move to that depth on its own. I was only doing it for lichess's opening lines, and it still cost maybe $200 worth of compute time (I can't recall the exact numbers).
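To make that concrete, here's roughly what the per-move evaluation looks like if you drive Stockfish over UCI with the python-chess library. This is just a sketch of the idea, not the site's actual code; the library choice, engine path, and depth are my assumptions.

```python
import chess
import chess.engine


def evaluate_all_moves(board: chess.Board, depth: int = 20,
                       engine_path: str = "stockfish") -> dict:
    """Evaluate the current position and the position after every legal move."""
    results = {}
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        # One search for the position itself...
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        results["current"] = info["score"].white()
        # ...plus a separate full search after each legal move, which is where
        # the extra one or two orders of magnitude of compute come from.
        for move in list(board.legal_moves):
            board.push(move)
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            results[move.uci()] = info["score"].white()
            board.pop()
    finally:
        engine.quit()
    return results


if __name__ == "__main__":
    board = chess.Board()
    board.push_san("e4")
    board.push_san("c5")  # Sicilian Defense
    for move, score in evaluate_all_moves(board).items():
        print(move, score)
```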
As far as I know (and I've only spent a couple dozen hours with Stockfish; a Stockfish dev would know more), Stockfish does not give you easy access to the hash table that holds all the evaluations. I could not figure out how to get it to print its hash tables at all in order to get at the values computed deeper in the search.
I've considered forking Stockfish and trying to make those values accessible somehow, in order to make visualizations like the ones in that video, and maybe someday I will. For now, I think that's the missing technical piece: without it, one is forced to recompute every single node of the graph at the desired depth rather than running the search once and reading back the values that have already been computed. Maybe someday, when I have more time, I'll take a shot at it, but not anytime soon, I don't think.
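For a sense of what "recompute every single node" means in practice, here's a naive sketch of walking a small opening tree and launching a fresh search at each node, since the values already sitting in the engine's transposition table aren't exposed over UCI. Again, python-chess, the engine path, the depth, and the ply count are all assumptions for illustration.

```python
import chess
import chess.engine


def evaluate_tree(engine, board, plies_left, depth=12, scores=None):
    """Run a separate full search for every position reachable within plies_left."""
    if scores is None:
        scores = {}
    fen = board.fen()
    if fen not in scores:  # dedupe transpositions, but each node still gets its own search
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        scores[fen] = info["score"].white()
    if plies_left > 0:
        for move in list(board.legal_moves):
            board.push(move)
            evaluate_tree(engine, board, plies_left - 1, depth, scores)
            board.pop()
    return scores


engine = chess.engine.SimpleEngine.popen_uci("stockfish")
try:
    # Even three plies from the starting position is roughly 9,000 separate
    # searches; doing this across a whole opening book is what makes it pricey.
    scores = evaluate_tree(engine, chess.Board(), plies_left=3)
    print(len(scores), "positions evaluated")
finally:
    engine.quit()
```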