I think that's going way too far!
We've had automated theorem proving since the '60s. What we need is automated theorem discovery. Erdős discovered these theorems even if he wasn't really able to prove them. Euler and Gauss discovered a ton of stuff they couldn't prove. It is weird that nobody considers this to be intelligence. Instead, intelligence is a little game AI plays with Lean.
AI researchers keep trying to reduce intelligence to something tiny and approachable, like automated theorem proving. It's easy: you write the theorem you want proven and hope you get a proof. It works or it doesn't. Nice and benchmarkable.
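To make the "state a theorem, hope for a proof" loop concrete, here is a minimal sketch in Lean 4 (my own toy example, not from the thread): a human writes the statement, and the prover's whole job is to fill in the term after `:=`.

```lean
-- The statement is supplied by a human; an automated prover is asked
-- only to produce the proof term on the right of `:=`.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The success criterion is binary (the proof type-checks or it doesn't), which is exactly what makes it so benchmarkable.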
Automated axiom creation seems a lot harder. How is an LLM supposed to know that "between any two points there is a line" formalizes an important property of physical space? Or how to suggest an alternative to Turing machines / lambda calculus that expresses the same underlying idea?
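By contrast, here is what the axiom-creation side even looks like, a hedged sketch of Euclid's first postulate as Lean axioms (the names `Point`, `Line`, `onLine`, and `line_through` are my own choices, not a standard library):

```lean
-- Writing the axiom down is mechanically trivial; the hard part is
-- knowing that THIS is the property of physical space worth formalizing.
axiom Point : Type
axiom Line : Type
axiom onLine : Point → Line → Prop
axiom line_through :
  ∀ p q : Point, p ≠ q → ∃ l : Line, onLine p l ∧ onLine q l
```

Nothing in the formalism tells you whether these axioms capture anything important; that judgment is the part with no benchmark.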
> What we need is automated theorem discovery.
I've been thinking: mathematicians have fun doing math, making discoveries, crafting proofs.
Did the Tour de France & Co. stop making sense once small, lightweight, powerful e-bicycles appeared?
Using a computer as a helper is like riding a bicycle; using LLMs seems more like riding an e-bicycle, which is another thing entirely.