Hacker News

Autoresearch: Agents researching on single-GPU nanochat training automatically

32 points · by simonpure · yesterday at 8:22 PM · 11 comments

Comments

oezi · today at 1:28 AM

Is there an Autoresearch for Jupyter somewhere? Something where I point it at a Jupyter cell to improve, based on another cell that calculates the target metric?

mikert89 · today at 1:10 AM

As AI improves, most tasks will become something like this: environments set up where the model learns through trial and error.

Any human endeavor that can be objectively verified in an environment like this can be completely automated.

abeppu · today at 12:28 AM

But the experiments that "improved" validation BPB in the GH screenshot were all basically hyperparameter changes, right? So is this better or worse, either per experiment or per unit of time, than hyperparameter-tuning techniques that don't involve an LLM? It's not clear from this whether the LLM is more or less making random changes that sometimes work, or whether the LLM's reasoning actually finds "good" changes because of what it has internalized. E.g., how does this compare to a hyperparameter-tuning pass with, say, BayesOpt that runs the same number of 5-minute training experiments?
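The LLM-free baseline this comment asks about can be even simpler than BayesOpt: a random-search loop over the same budget of short training runs. A minimal sketch, assuming a hypothetical stand-in objective (`fake_val_bpb` is invented here for illustration; in the real comparison it would launch an actual 5-minute nanochat training run and return the measured validation BPB):

```python
import random

def fake_val_bpb(lr, wd):
    # Hypothetical proxy for one short training run: a fake "validation
    # BPB" surface that is minimized near lr = 3e-3, wd = 0.
    return (abs(lr - 3e-3) * 100) ** 0.5 + wd * 0.1 + 0.8

def random_search(n_trials, seed=0):
    # Sample n_trials hyperparameter settings and keep the best one,
    # mirroring "same number of 5-min training experiments".
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-4, -2)   # log-uniform learning rate
        wd = rng.uniform(0.0, 0.2)       # uniform weight decay
        bpb = fake_val_bpb(lr, wd)
        if best is None or bpb < best[0]:
            best = (bpb, lr, wd)
    return best

best_bpb, best_lr, best_wd = random_search(n_trials=40)
print(f"best val BPB {best_bpb:.3f} at lr={best_lr:.2e}, wd={best_wd:.3f}")
```

Swapping the sampler for a Gaussian-process acquisition loop (as in BayesOpt libraries) keeps the same interface; either way it gives a no-LLM cost/quality baseline to compare the agent's experiments against.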

falcor84 · yesterday at 10:52 PM

The only thing missing is for the agents to publish and peer-review their research.

AlexCoventry · yesterday at 11:05 PM

Wow, Gemini suggested a very similar experiment to me yesterday. Guess I know where it got the idea from, now. :-)

kubb · today at 12:27 AM

He's burning Claude tokens to slightly improve his tiny, not-very-capable LLM? It's fun, I bet, but wake me up when it leads to a research breakthrough.

lostmsu · yesterday at 11:55 PM

The non-zero-based chart makes it look like it was very successful.
