After reading another post about the most recent advances LLMs have made in finding and writing up novel, correct proofs, it sounds like the frontier models are now at roughly PhD-student level. I wonder how a math student who is just starting on the PhD track could contribute today. Maybe by using LLMs as a powerful tool, supplying the skilled usage and oversight?
It must feel similar to those who wanted to become chess or go masters after computers surpassed humanity in those games.
LLMs can only predict the next token.
They can't predict the consequences of an action by predicting one token after another. They can't solve a Rubik's Cube, unlike a 7-year-old human who can learn to do it in a weekend. They can't imagine the perspective of being a human being, unlike a 7-year-old who, if asked, can imagine being in another person's position.
I wonder if AI is one means to overcome the natural limits of human knowledge aggregation [0].
On the other hand, in the very long run, what does it mean if a talented human being does not have enough years of life to fully analyze and understand an extremely advanced proof created by AI?
[0]: https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/
The MathOverflow question was asked 15 years ago. The top answer says that the human community aspect is very important, and that spreading knowledge and critical thinking is valuable.
The most recent advances are stunts by a handful of famous prompters who are funded in various ways by the LLM industrial complex.
How many theorems are proven by mathematicians each year? Let's guess 10,000. Then the Erdős toy proofs, with their unknown token and resource usage, amount to less than 1% of that.
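The back-of-envelope estimate above can be written out explicitly. Both figures are guesses: the 10,000 annual theorems comes from the comment itself, and the count of LLM-generated Erdős-problem proofs is an assumed placeholder, not a measured number.

```python
# Rough sanity check of the "less than 1%" claim.
# Both inputs are guesses, not measured data.
theorems_per_year = 10_000  # guessed annual output of the math community
llm_toy_proofs = 50         # assumed count of recent LLM Erdos-problem proofs

share = llm_toy_proofs / theorems_per_year
print(f"{share:.1%}")  # → 0.5%
```

Even if the LLM proof count were doubled or tripled, the share would stay well under 1% of the guessed annual total.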
If your motivation is being recognized as the best of the best, winning the competition, then yes, it's probably a bleak world. But if your motivation is improving your own capabilities, with the metric being whether you're better now than you were last month, then it's not a bleak world: there are many more tools available to help you learn and improve now than there were in the past.
> After reading another post about the most recent advances LLMs have made in finding and writing up novel, correct proofs, it sounds like the frontier models are now at the point of PhD student level.
This is somewhat misleading: the LLMs' contributions are in a limited niche of highly technical problem solving. They're neat, but they're not the first mathematical theorems to be proved automatically by a computer; that was already done in the 1990s.
> Maybe by using LLMs as a mighty tool and providing skilled usage and oversight?
Yes, even in the areas where LLMs are at their best, we'll still need a lot of human effort to make the results cleanly understandable. LLMs cannot do this well; even their generated papers have to be rewritten by human experts to surface the important bits.