Hacker News

Show HN: Autoresearch@home

58 points by austinbaggio | yesterday at 11:27 PM | 10 comments

autoresearch@home is a collaborative research collective where AI agents share GPU resources to collectively improve a language model. Think SETI@home, but for model training.

How it works: Agents read the current best result, propose a hypothesis, modify train.py, run the experiment on your GPU, and publish the results back. When an agent beats the current best validation loss, that result becomes the new baseline for every other agent. Agents learn from both successful runs and failures, since we're using Ensue as the collective memory layer.
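A minimal sketch of that loop, with in-memory stand-ins: the class and function names here are illustrative, not the project's real API, and the "experiment" is simulated rather than an actual training run.

```python
import random

class CollectiveMemory:
    """Stand-in for the shared memory layer (the post uses Ensue)."""
    def __init__(self, baseline_loss):
        self.best_loss = baseline_loss
        self.runs = []  # every published run, wins and failures alike

    def publish(self, hypothesis, loss):
        self.runs.append({"hypothesis": hypothesis, "loss": loss})
        if loss < self.best_loss:  # a win becomes the new baseline for all agents
            self.best_loss = loss
            return True
        return False

def run_experiment(hypothesis, baseline_loss):
    """Placeholder for editing train.py and training on a local GPU."""
    random.seed(hash(hypothesis) % 2**32)  # deterministic stand-in result
    return baseline_loss * random.uniform(0.95, 1.05)

def agent_step(memory, agent_id):
    baseline = memory.best_loss                           # 1. read current best
    hypothesis = f"agent-{agent_id}: tweak lr schedule"   # 2. propose a change
    loss = run_experiment(hypothesis, baseline)           # 3. run it locally
    return memory.publish(hypothesis, loss)               # 4. publish the result

memory = CollectiveMemory(baseline_loss=3.0)
for i in range(5):
    agent_step(memory, i)
print(memory.best_loss, len(memory.runs))
```

The key design point is that every run is published, not just the wins, so later agents can avoid hypotheses that already failed.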

This project extends Karpathy's autoresearch by adding the missing coordination layer so agents can actually build on each other's work.

To participate, you need an agent and a GPU. The agent handles everything: cloning the repo, connecting to the collective, picking experiments, running them, publishing results, and asking you to verify you're a real person via email.

Send this prompt to your agent to get started: "Read https://github.com/mutable-state-inc/autoresearch-at-home, follow the instructions, join autoresearch, and start contributing."

The whole point of this experiment is to show that agents work better when they can build on each other's work. The timeline is live, so you can watch experiments land in real time.


Comments

Lerc | today at 12:50 AM

When training lots of models with subtly different parameters like this, is there anything to be learned from the differences in logprobs between them for the same input? Obviously a model with a lower loss has better logprobs, but are they fairly uniformly similar, with gains in one or a few areas, or is it noisier with a lower overall loss?

show 1 reply
ahmedhawas123 | today at 12:30 AM

First time I am seeing this or autoresearch in general. Incredibly cool. I can think of plenty of use cases this can apply to (e.g., drug research, trading).

show 1 reply
miligauss | yesterday at 11:48 PM

The agents also monitor and follow research strategies regardless of the performance baseline, so everything in the knowledge base, including local minima, is considered during strategy ideation. In theory you could use a Mac mini, for instance, and still produce results that help the aggregate.

gabia | today at 12:16 AM

Cool! However, when I click the commit_url links I get a 404 page on GitHub.

show 1 reply
zmanian | yesterday at 11:32 PM

Could the website also make it clearer that you need a GPU to contribute?

show 1 reply
miligauss | today at 12:20 AM

fwiw the agents just drop their whole solutions
