Does autoresearch work for projects that are not LLM-based? E.g. in Karpathy's example he is optimizing nanoGPT. What if I wanted to improve a UNet for image segmentation?
Yes, that's the real strength of it. The structure is dead simple, so you just have to switch the goal metric.
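To make that concrete: for a UNet you'd hand the harness a single number to maximize, e.g. mean IoU on a validation set. A minimal sketch (names and setup are my own assumptions, not autoresearch's actual API):

```python
# Hypothetical goal metric for a segmentation run: mean IoU.
# The optimizer only needs one scalar to push up; the model and
# data loading stay whatever they already were.
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent in both masks; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

Swapping this in for a perplexity or loss metric is the whole "switch the goal metric" step.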
I used it on a data science project to find the best rules for achieving a defined outcome. At first just for fun, but then I actually used some of its insights (and it caught a sampling issue I'd overlooked, oops).
The gist of these things is you point them at an eval metric and say "make it go better." So you can point it at anything you can measure. The example in the blog post here is bounding boxes on wood cut images.
I used it to speed up a codecompass-like repo from 86 files per second to 2000. I still haven't used the repo in production, so maybe it secretly broke things, but the ability to say "optimize this benchmark and commit only if you pass these tests" is nice.
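That "commit only if the tests pass" guard is easy to sketch standalone. This is my own illustration, not how autoresearch actually wires it up; the `pytest` and `git` commands are assumptions:

```python
# Hypothetical guard: after the optimizer mutates the working tree,
# run the test suite and only commit if everything is green.
import subprocess

def commit_if_green(message, run=subprocess.run):
    """Run tests; commit the working tree only on success.

    `run` is injectable so the logic can be tested without real git/pytest.
    """
    tests = run(["pytest", "tests/"])
    if tests.returncode == 0:
        run(["git", "commit", "-am", message], check=True)
        return True
    # tests failed: leave the tree uncommitted for inspection
    return False
```

The benchmark number decides which experiment wins; this gate just keeps a fast-but-broken variant from ever landing.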
I think image segmentation is in the same class as LLMs: ML experiments.
What about more distant software projects? Give it the CPython source code and tell it you want it to be faster.
Tobi from Shopify used a variant of autoresearch to optimize the Liquid template engine, and found a 53% speedup after ~120 experiments: https://github.com/Shopify/liquid/pull/2056
I wrote up some more notes on that here: https://simonwillison.net/2026/Mar/13/liquid/