I'm not accusing you in particular, but I feel like there's a lot of circular reasoning around this point. Something like: AI can't discover "new math" -> AI discovers something -> since it was discovered by AI, it must not be "new math" -> AI can't discover "new math".
For example, there was a recent post here about GPT-5.4 (and later some other models) solving a FrontierMath open problem: https://news.ycombinator.com/item?id=47497757
That would definitely be considered "new math" if a human did it, but since it was AI, people aren't so sure.
There's a kind of rubric I use on stuff like this: if LLMs are discovering new math, why have I only read one or two articles where it's happening? Wouldn't it be happening with regularity?
The most obvious example of this kind of thinking: if LLMs are replacing developers, why is OpenAI still hiring?