Hacker News

omnicognate · last Monday at 8:37 PM · 2 replies

That was an impressive result, but AIUI not an example of "coming up with and proving (preferably numerous) significant new theorems without human guidance".

For one thing, the output was an algorithm, not a theorem (except in the Curry-Howard sense). More importantly, though, AlphaEvolve has to be given an objective function to evaluate the algorithms it generates, so it can't be considered to be working "without human guidance". It only uses LLMs for the mutation step, i.e. generating new candidate algorithms; the outer loop is an optimisation process capable only of evaluating candidates against that objective function. It's not going to spontaneously decide to tackle the Langlands program.
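
Roughly, the shape of the thing as I understand it. This is my own illustrative sketch, not AlphaEvolve's actual code, and all the names are made up:

    def evolve(seed_program, objective, llm_mutate, generations=100, pop_size=20):
        # Toy evolutionary loop. The LLM only proposes mutated candidates;
        # the outer loop just keeps whichever ones score best on the
        # human-supplied objective function.
        population = [seed_program]
        for _ in range(generations):
            parent = max(population, key=objective)          # select by fitness
            children = [llm_mutate(parent) for _ in range(pop_size)]
            population = sorted(population + children, key=objective)[-pop_size:]
        return max(population, key=objective)

The point being: all of the direction comes from objective(), which a human has to design up front. Nothing in that loop decides what problem is worth solving.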

Correct me if I'm wrong about any of the above. I'm not an expert on it, but that's my understanding of what was done.


Replies

OrderlyTiamat · yesterday at 6:11 AM

I'll concede to all your points here, but I was nevertheless extremely impressed by this result.

You're right, of course, that this was not without human guidance, but to me even successfully using LLMs just for the mutation step was, in and of itself, surprising enough to revise my own certainty that LLMs absolutely cannot think.

I see this more as a step in the direction of what you're looking for than as a counterexample.

pegasus · last Monday at 8:58 PM

Yes, it's a very technical and circumscribed result, not one requiring deep insight into the nature of various mathematical models.