> This problem is about improving lower bounds on the values of a sequence, , that arises in the study of simultaneous convergence of sets of infinite series, defined as follows.
One thing I notice in the AlphaEvolve paper, as well as here, is that LLMs are being shown to solve optimization problems, something we have been using computers for for a very long time. In fact, I think the AlphaEvolve-style prompt-augmentation approach is a more principled version of what these authors have done here, and I am fairly confident this problem would have been solved by that approach as well.
In spirit, the LLM seems to either compute the (meta-)optimization step(s) in activation space, or simply retrieve candidate proposals, i.e., the deductive closure of computation it has seen before. It would be interesting to see if we can extract or model the exact algorithms from the activations.
In the latter case, it would mean that LLMs alone can never "reason", and you need an external planner plus verifier (an AlphaEvolve-style evolutionary planner, for example).
We are still looking for proof of the former behaviour.