I tend to wonder if stuff like this is an informative boundary on AI capabilities. I mean, you can't ask an LLM today to do that (AFAICT). "Here's a simply-specified but extremely broad search space, solve this problem in it" isn't something that fits the model. But it's a relatively common (if not "easy") task that human beings like to show off with.
What needs to change to enable this kind of exploration?
Actually, the (in)famous "sparks of general intelligence" paper about GPT-4 included tasks such as "Draw a unicorn in TikZ", which really isn't that far off from this task. There were also examples of drawing cars, trucks, cats, etc. with SVG.
But I do think that evolutionary algorithms or MCMC variants could do a better job of this, especially if paired with an auxiliary model for scoring their intermediate results.
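Roughly the kind of loop I mean, as a minimal sketch: a population of candidate circle arrangements, random mutation, and selection by a `score` callable that stands in for the auxiliary scoring model (pixel difference against a target, a CLIP-style scorer, whatever). All names and parameters here are illustrative, not from the post.

```python
import random

N_CIRCLES = 13      # hypothetical problem size, matching the circle example below
POP_SIZE = 64
GENERATIONS = 200

def random_circle():
    # Normalized coordinates in [0, 1]; radius capped arbitrarily.
    return {"x": random.random(), "y": random.random(),
            "r": random.uniform(0.01, 0.3), "filled": random.random() < 0.5}

def mutate(candidate):
    # Copy the candidate and perturb one circle's parameters slightly.
    child = [dict(c) for c in candidate]
    c = random.choice(child)
    key = random.choice(["x", "y", "r"])
    c[key] = min(1.0, max(0.0, c[key] + random.gauss(0, 0.05)))
    if random.random() < 0.1:
        c["filled"] = not c["filled"]
    return child

def evolve(score):
    # `score` maps a candidate (list of circle dicts) to a fitness value;
    # in practice this is where the auxiliary scoring model would be called.
    population = [[random_circle() for _ in range(N_CIRCLES)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=score, reverse=True)   # best first
        survivors = population[: POP_SIZE // 4]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=score)
```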
I was thinking it could, actually, given a feedback loop. The tool would take a JSON spec of 13 circles, each with x, y position, radius, and whether it's filled in or empty, and output an image. It could look at the image and iterate.
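A rough sketch of the renderer half of that loop, using Pillow. The field names (`x`, `y`, `radius`, `filled`) and the normalized 0-1 coordinates are assumptions about what the JSON could look like, not anything specified above.

```python
import json
from PIL import Image, ImageDraw

def render_circles(spec_json: str, size: int = 512) -> Image.Image:
    """Render a JSON list of circles to an image the model could inspect."""
    circles = json.loads(spec_json)
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    for c in circles:
        # Convert normalized coordinates to pixel space.
        cx, cy, r = c["x"] * size, c["y"] * size, c["radius"] * size
        bbox = [cx - r, cy - r, cx + r, cy + r]
        if c["filled"]:
            draw.ellipse(bbox, fill="black")
        else:
            draw.ellipse(bbox, outline="black", width=3)
    return img

# Example round trip (one circle for brevity): the model proposes a spec,
# the tool renders it, and the image goes back to the model for the next try.
spec = json.dumps([{"x": 0.5, "y": 0.5, "radius": 0.2, "filled": False}])
render_circles(spec).save("attempt.png")
```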
Is it impossible, in this day and age, to enjoy a post without thinking about LLMs? It's like an obsession.