The paper does not give many details about the evolution part. Normally, evolutionary algorithms include a crossover component where solutions can breed with each other; without that, the approach is better classified as hill climbing / beam search.
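Roughly the distinction I mean, as a toy bitstring example (nothing to do with the paper's actual setup):

```python
import random

def fitness(bits):
    # Toy objective: count the 1-bits (hypothetical problem for illustration).
    return sum(bits)

def mutate(bits, rate=0.05):
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    # One-point crossover: the "breeding" step that makes it genetic.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def genetic_algorithm(pop_size=20, length=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Children inherit pieces from two parents.
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

def hill_climb(length=50, steps=2000):
    best = [random.randint(0, 1) for _ in range(length)]
    for _ in range(steps):
        cand = mutate(best)  # mutation only, no recombination between solutions
        if fitness(cand) >= fitness(best):
            best = cand
    return best
```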
During optimization of the current algorithm, the model is fed a sample of previous attempts along with their evaluations. Because all of this goes into a single prompt, which the LLM can reference arbitrarily, the model can combine components of previous attempts into the current attempt at will. So recombination is well represented here, which brings it closer to a genetic algorithm. In essence, it combines elements of hill climbing, beam search, and genetic algorithms by virtue of the LLM's unbounded flexibility.
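Very schematically, this is how I read that loop; `llm_propose` and `evaluate` here are just placeholders for the model call and the paper's evaluator, not its actual interfaces:

```python
import random

def evolve_with_llm(llm_propose, evaluate, seed_program, iterations=100, k=3):
    """Sketch of a prompt-based evolutionary loop (my reading, not the paper's code).

    llm_propose(prompt) -> program text   (placeholder for the model call)
    evaluate(program)   -> score          (placeholder for the evaluator)
    """
    population = [(seed_program, evaluate(seed_program))]
    for _ in range(iterations):
        # Sample a few previous attempts and show them, with scores, in one prompt.
        parents = random.sample(population, min(k, len(population)))
        prompt = "Improve on these attempts:\n" + "\n".join(
            f"# score={score}\n{prog}" for prog, score in parents
        )
        # The LLM can freely recombine pieces of any parent it sees, which is
        # where crossover-like behaviour comes from without an explicit operator.
        child = llm_propose(prompt)
        population.append((child, evaluate(child)))
        # Keep the population bounded, biased toward better candidates.
        population = sorted(population, key=lambda x: x[1], reverse=True)[:50]
    return population[0]
```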
I fear it’s not really an evolutionary algorithm in the typical sense.
One intriguing caption mentioned something requiring 16 “mutations”. I’d sure like to know how these mutations work.
There are also 'evolution strategy' algorithms that do not use the typical mutation and crossover operators, but instead use a population of candidates (search samples) to approximate the local gradient of the fitness landscape.
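That's roughly the OpenAI-ES / natural-evolution-strategies flavour; a toy NumPy sketch of the "population approximates the gradient" idea:

```python
import numpy as np

def evolution_strategy(f, theta, sigma=0.1, lr=0.02, pop_size=50, iterations=200):
    """Estimate a search gradient from a population of perturbed candidates.

    f: objective to maximize; theta: initial parameter vector.
    """
    for _ in range(iterations):
        noise = np.random.randn(pop_size, theta.size)        # population of perturbations
        rewards = np.array([f(theta + sigma * n) for n in noise])
        # Standardize rewards, then weight each perturbation by how well it did;
        # this is effectively a finite-difference estimate of the gradient.
        advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        grad_estimate = (noise.T @ advantages) / (pop_size * sigma)
        theta = theta + lr * grad_estimate
    return theta

# Toy usage: climb toward the maximum of a simple quadratic bowl.
theta = evolution_strategy(lambda x: -np.sum(x**2), np.random.randn(5))
```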