> Poor man’s version of this which requires no training would be to evaluate positions at low depth and high depth and select positions where the best move switches.
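For concreteness, the quoted heuristic is only a few lines with python-chess and any UCI engine (engine path and depth values below are illustrative, not from the parent):

```python
# Sketch of the quoted heuristic: flag positions where the engine's
# best move changes between a shallow and a deep search.
# Engine path and depths are illustrative assumptions.
import chess
import chess.engine

def best_move_switches(fen: str, engine_path: str = "stockfish",
                       shallow: int = 2, deep: int = 18) -> bool:
    board = chess.Board(fen)
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        shallow_result = engine.play(board, chess.engine.Limit(depth=shallow))
        deep_result = engine.play(board, chess.engine.Limit(depth=deep))
    return shallow_result.move != deep_result.move
```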
I've looked at the ~same problem in go rather than chess quite a bit. It turns out that strategy does remarkably poorly, mostly because weak players don't just think at low depth; they also do random weird shit and miss entire patterns. E.g., in go, if you run a strong engine with _zero_ thinking (just take the network's top policy move), it's already damn close to pro level.
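Here's roughly what "zero thinking" means in practice, as a sketch against KataGo's JSON analysis engine. The config/model paths are placeholders, and the field names follow KataGo's documented analysis protocol as I remember it, so treat the details as assumptions:

```python
# Sketch: ask KataGo for its raw policy output on the empty board,
# with essentially no search (maxVisits=1). Paths are placeholders.
import json
import subprocess

katago = subprocess.Popen(
    ["katago", "analysis", "-config", "analysis.cfg", "-model", "model.bin.gz"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

query = {
    "id": "policy-only",
    "moves": [],               # empty board, Black to move
    "rules": "tromp-taylor",
    "komi": 7.5,
    "boardXSize": 19,
    "boardYSize": 19,
    "maxVisits": 1,            # no real search: ~just the network's prior
    "includePolicy": True,
}
katago.stdin.write(json.dumps(query) + "\n")
katago.stdin.flush()

response = json.loads(katago.stdout.readline())
policy = response["policy"]    # board points row-major from top-left, plus a final "pass" entry
best = max(range(len(policy)), key=policy.__getitem__)
if best == 19 * 19:
    print("pass")
else:
    print(f"policy argmax at column {best % 19}, row {best // 19} (from top-left)")
katago.terminate()
```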
Getting them to play anything like a beginner, or even an intermediate player, is functionally impossible. You _may_ be able to trick one into playing as weakly as a beginner if you really try, but it would feel _nothing_ like one.
> Training neural nets to model behavior at different levels is also possible but high rated players are inherently more difficult to model.
Maybe they are more difficult to model (not sure tbh, but granted for the moment), but it's _far_ easier to generate training data for strong players via reinforcement-learning approaches.
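To give a feel for the asymmetry: once you have a strong engine, strong-side (position, move) training pairs can be mass-produced by letting it play itself, whereas there's no analogous generator for authentic beginner moves. A minimal sketch with python-chess (engine path, node limit, and game count are illustrative; the RL training loop itself is out of scope here):

```python
# Sketch: harvest (FEN, best-move) training pairs from engine self-play.
# Engine path, node limit, and game count are illustrative assumptions.
import chess
import chess.engine

def self_play_pairs(engine_path: str = "stockfish",
                    games: int = 10, nodes: int = 10_000):
    pairs = []
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        for _ in range(games):
            board = chess.Board()
            while not board.is_game_over():
                result = engine.play(board, chess.engine.Limit(nodes=nodes))
                pairs.append((board.fen(), result.move.uci()))
                board.push(result.move)
    return pairs
```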
In go, I remember there were rare positions/puzzles where old (pre-MCTS) algorithms did better than humans, and those were artificial positions with a lot of complex recaptures in the same area. That's rare in go, but common in chess.
I do think positions where the best move at depth N is a terrible move at depth N+1 are hard, especially when it isn't just recaptures on the same square. Chess engines used to have special heuristics to avoid such "horizon effects"; I don't know if they still do.
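You can hunt for such depth-flip positions mechanically: take the best move at depth N, then score it one ply deeper against the depth-N+1 best line. A sketch with python-chess (engine path, depth, and the centipawn threshold are illustrative assumptions):

```python
# Sketch: detect positions where the depth-N best move looks terrible
# at depth N+1. Threshold and engine path are illustrative assumptions.
import chess
import chess.engine

def depth_flip(fen: str, depth: int, engine_path: str = "stockfish",
               threshold_cp: int = 150) -> bool:
    board = chess.Board(fen)
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        shallow = engine.analyse(board, chess.engine.Limit(depth=depth))
        shallow_best = shallow["pv"][0]
        deep = engine.analyse(board, chess.engine.Limit(depth=depth + 1))
        deep_score = deep["score"].pov(board.turn).score(mate_score=10_000)
        # Re-score the shallow best move one ply deeper.
        forced = engine.analyse(board, chess.engine.Limit(depth=depth + 1),
                                root_moves=[shallow_best])
        forced_score = forced["score"].pov(board.turn).score(mate_score=10_000)
    # Large gap => the shallow favorite collapses with one more ply.
    return deep_score - forced_score >= threshold_cp
```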