Hacker News

mikestorrent · yesterday at 10:38 PM · 1 reply

There's no reason it has to be that. Imagine taking an agent and pairing it with a lesser-known but technically superior language stack; say you're an SBCL fan. You find that the LLM is less useful because it hasn't been trained on a million Stack Overflow posts about Lisp, so it can't reason about it as well as it can about Python.

So, you set up a long-running agent team and give it the job of building up a very complete and complex set of examples and documentation, with in-depth tests, that produce various kinds of applications and systems using SBCL, write books on the topic, etc.

It might take a long time and a lot of tokens, but it would be possible to build a synthetic ecosystem of true, useful information that has been agentically determined through trial-and-error experiments. That is then suitable training data for a new LLM. This would actually advance the state of the art; not in terms of "what SBCL can do" but rather in terms of "what LLMs can directly reason about with regard to SBCL without needing to consume documentation".

I imagine this same approach would work fine for any other area of scientific advancement; as long as experimentation is in the loop. It's easier in computer science because the experiment can be run directly by the agent, but there's no reason it can't farm experiments out to lab co-op students somewhere when working in a different discipline.


Replies

ls612 · today at 4:17 AM

This works for code because there is an external verification step: the agent has to run the code on a machine and observe the results. That is very easy for software, since LLMs are software and can just invoke other software; it becomes much harder in many other scientific fields.