I've been working on something very similar as a tool for my own AI research -- though I don't have the success they claim; mine often plateaus on the optimization metric. I think the secret sauce is in the meta-prompting and meta-heuristics, which the paper only comments on quite vaguely, but the idea makes sense -- changing the prompts changes the dynamics of the search space and helps the LLM get out of ruts. I'm now going to try integrating some ideas based on my interpretation of their work to see how it goes.
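Roughly, the loop I've ended up with looks like this (just a sketch -- `call_llm` and `evaluate` are stubs, and the wording of the meta step is my guess at what the paper means, not anything it actually specifies):

```python
import random

# Sketch of an LLM-driven search loop with a "meta" step that kicks in
# when progress stalls. call_llm and evaluate are stand-in stubs,
# not any real API.

def call_llm(prompt: str) -> str:
    """Stub for an LLM call that would return a candidate program/heuristic."""
    return f"candidate derived from: {prompt[:40]}..."

def evaluate(candidate: str) -> float:
    """Stub objective; in practice this runs the candidate and measures it."""
    return random.random()

def search(base_prompt: str, iterations: int = 50, stall_limit: int = 5):
    best, best_score, stall = None, float("-inf"), 0
    meta_hint = ""  # extra strategy instructions prepended when the search stalls
    for _ in range(iterations):
        candidate = call_llm(meta_hint + base_prompt)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score, stall = candidate, score, 0
        else:
            stall += 1
        if stall >= stall_limit:
            # The meta step: ask the model to critique the plateaued best
            # and propose a different strategy, which changes the prompt
            # (and hence the region of the search space) for later rounds.
            meta_hint = call_llm(
                f"Score is stuck at {best_score:.3f} with this solution:\n"
                f"{best}\nSuggest a different strategy to try next."
            ) + "\n"
            stall = 0
    return best, best_score

if __name__ == "__main__":
    print(search("Improve this training schedule: ..."))
```

The part I expect to matter is that the meta step rewrites the prompt rather than the candidate, so later generations explore a different region instead of resampling the same neighborhood.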
If it goes well, I could open source it.
What are the things you would want to optimize with such a framework? (So far I've been focusing on optimizing ML training and architecture search itself.) Hearing other ideas would help motivate me to open source it if there's real demand for something like this.
Also definitely interested in open-source ML search: there are so many new approaches (I follow this channel for innovations; it is overwhelming https://www.youtube.com/@code4AI), and it would be great to be able to define a use case and have a search come up with the best approaches.
I work in the field of medical image processing. I haven't thought particularly hard about it, but I'm sure I could find a ton of use cases if I wanted to.
This does seem similar to what has been done in the neural architecture search domain, doesn't it?
In my case, I'd mainly be interested in mathematics: I'd provide a mathematical problem and a baseline algorithm for it, and I'd want an open-source framework that could improve on that baseline.
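To make that concrete, an interface along these lines would already cover my use case (purely hypothetical -- the `Problem` dataclass and the bin-packing example are illustrations of the kind of input I'd supply, not any existing framework's API):

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical user-facing interface: the user supplies a problem
# statement, a baseline algorithm, and a scoring function; the
# framework would search for something better.

@dataclass
class Problem:
    description: str                      # natural-language statement for the LLM
    baseline: Callable[[list[int]], int]  # starting algorithm to improve on
    score: Callable[[Callable], float]    # higher is better

def first_fit(weights: list[int]) -> int:
    """Baseline: first-fit bin packing with capacity 10; returns bins used."""
    bins: list[int] = []
    for w in weights:
        for i, used in enumerate(bins):
            if used + w <= 10:
                bins[i] += w
                break
        else:
            bins.append(w)
    return len(bins)

def score(algorithm: Callable[[list[int]], int]) -> float:
    """Fewer bins on a fixed instance is better."""
    instance = [4, 8, 1, 4, 2, 1, 7, 3, 6, 5]
    return -algorithm(instance)

problem = Problem(
    description="Minimise the number of bins in one-dimensional bin packing.",
    baseline=first_fit,
    score=score,
)

# A real framework would expose something like evolve(problem, budget=...)
# and return an improved algorithm; here we just check the baseline runs.
print(problem.score(problem.baseline))
```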