Uh, this was exactly a "remix" of similar proofs that most likely were in the training data. It's just that some people underestimate how powerful that "remix" ability can be, especially when paired with the ability to recognize formal logical errors in an attempted proof and to know how such errors are typically addressed.
Then what sort of math problem would count as a milestone for you, one where an AI is doing something genuinely novel?
Or are you just saying that solving novel problems involves remixing ideas? Well, that's true for human problem solving too.