That was an impressive result, but AIUI not an example of "coming up with and proving (preferably numerous) significant new theorems without human guidance".
For one thing, the output was an algorithm, not a theorem (except in the Curry-Howard sense). More importantly, AlphaEvolve has to be given an objective function to evaluate the algorithms it generates, so it can't be said to be working "without human guidance". It only uses LLMs for the mutation step, i.e. generating new candidate algorithms; the outer loop is an optimisation process capable only of scoring candidates against that human-supplied objective. It's not going to spontaneously decide to tackle the Langlands program.
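To make the division of labour concrete, here's a minimal sketch of the kind of loop I'm describing. This is not AlphaEvolve's actual code; the names `llm_mutate` and `objective` are hypothetical stand-ins, and the selection scheme is just the simplest thing that illustrates the point.

```python
# Minimal sketch (my assumption of the control flow, not AlphaEvolve's real API).
# The LLM is used only to propose variants; a fixed, human-supplied objective
# function decides which candidates survive.

def evolve(seed_program, objective, llm_mutate, generations=100, pool_size=20):
    """Evolutionary outer loop over candidate programs (strings of code)."""
    pool = [(seed_program, objective(seed_program))]
    for _ in range(generations):
        parent, _ = max(pool, key=lambda p: p[1])   # pick the current best candidate
        child = llm_mutate(parent)                  # LLM appears only here, as the mutation operator
        pool.append((child, objective(child)))      # scored by the given objective, nothing else
        pool = sorted(pool, key=lambda p: p[1], reverse=True)[:pool_size]
    return max(pool, key=lambda p: p[1])[0]
```

The point of the sketch is that the objective function is an input the humans choose up front; the system never picks its own problems.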
Correct me if I'm wrong about any of the above. I'm not an expert on it, but that's my understanding of what was done.