The author is looking for positions that are difficult for low-rated players and easier for high-rated players.
Poor man’s version of this which requires no training would be to evaluate positions at low depth and high depth and select positions where the best move switches.
Training neural nets to model behavior at different levels is also possible but high rated players are inherently more difficult to model.
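The best-move-switch filter is easy to sketch. Below is a self-contained toy (the game tree, move names, and evaluations are all invented for illustration; a real version would query an engine such as Stockfish at two depths, e.g. via python-chess): a move that looks best at depth 1 is refuted one ply deeper, so the position gets flagged as a candidate.

```python
# Toy illustration of "best move switches between shallow and deep search".
# The tree and evals here are made up; in practice you'd ask a real engine
# for its best move at two depths and keep positions where they differ.

def minimax(node, depth, maximizing):
    """Depth-limited minimax over a dict-based game tree."""
    children = node.get("children")
    if depth == 0 or not children:
        return node["eval"]
    values = [minimax(c, depth - 1, not maximizing) for c in children.values()]
    return max(values) if maximizing else min(values)

def best_move(root, depth):
    """Best root move for the side to move (maximizing) at a given depth."""
    return max(root["children"],
               key=lambda m: minimax(root["children"][m], depth - 1, False))

# A position with a shallow trap: grabbing the pawn looks good statically,
# but one reply deeper the opponent wins material back.
root = {"children": {
    "grab_pawn": {"eval": 1.0, "children": {
        "quiet_reply": {"eval": 1.0},
        "tactic":      {"eval": -3.0},   # refutation only visible at depth 2+
    }},
    "develop": {"eval": 0.0, "children": {
        "quiet_reply": {"eval": 0.5},
    }},
}}

shallow, deep = best_move(root, 1), best_move(root, 2)
if shallow != deep:
    print(f"candidate position: depth-1 move {shallow!r} refuted by {deep!r}")
```

The same comparison drops straight onto real positions: run the engine once with a depth-1 limit and once at full depth, and keep the position if the principal moves disagree.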
> high rated players are inherently more difficult to model
Yes and no. There's a bit of a bathtub curve here: below ~1000 Elo, lower ratings are harder to predict because their moves are closer to random.
Seems like an excellent definition, mainly because I have no idea how to measure it otherwise. "Complexity is the time needed to solve a problem", why not.
I had this idea of drilling games against an engine with a set depth evaluation, since beating a depth-1 engine should teach simpler concepts than beating one at depth 4.
I vibe coded this into a browser app, but the evaluation is slow around depth 5: https://camjohnson26.github.io/chess-trainer/
> evaluate positions at low depth and high depth and select positions where the best move switches.
Won’t that bias the selection in favour of common sacrifices in the endgame? The issue with using depth is that humans don’t struggle with depth uniformly: they can calculate to a lot more depth through a sequence of obvious moves (recaptures, pawn races) than they can in more complex situations (closed positions in the midgame).
> Poor man’s version of this which requires no training would be to evaluate positions at low depth and high depth and select positions where the best move switches.
I've looked at the ~same problem in go instead of chess quite a bit. It turns out that that strategy does remarkably poorly, mostly because weak players don't just have low-depth thinking, they also do random weird shit and miss entire patterns. Eg in go if you run a strong go engine at _zero_ thinking (just take the model's best guess first move), it's already damn close to pro level.
Getting them to play anything like a beginner, or even intermediate player is functionally impossible. You _may_ be able to trick one into playing as weak as a beginner player if you really try, but it would feel _nothing_ like one.
> Training neural nets to model behavior at different levels is also possible but high rated players are inherently more difficult to model.
Maybe more difficult to model (not sure tbh, but granted for the moment), but it's _far_ easier to generate training data for strong players via reinforcement learning approaches.