Hacker News

mmargenot · yesterday at 4:47 PM · 1 reply

It is more common now to improve models in agentic systems "in the loop" with reinforcement learning. Anthropic is [very likely] doing this on the backend to systematically improve their models' performance with their own tools specifically. I did this with Goose at Block using more classic post-training approaches, since that work predated RL hitting the mainstream as an approach for this.
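Loosely sketched, the "in the loop" idea is: roll out the agent, score each tool-call trace with a verifier, and feed the scores back as RL rewards. The snippet below is a minimal illustrative sketch in plain Python; the trace format and `verify_trace` function are hypothetical, not the actual API of the verifiers library linked below.

```python
# Sketch of verifier-scored rollouts for tool-use RL (hypothetical trace
# format, NOT the `verifiers` library API): each rollout's tool calls are
# checked against a ground-truth result, and the score becomes the reward
# for a policy-gradient update (e.g. PPO/GRPO).

def verify_trace(trace, expected_result):
    """Reward 1.0 if the final tool call produced the expected result,
    0.0 if any call errored, and a small shaping reward otherwise."""
    calls = trace["tool_calls"]
    if any(call.get("error") for call in calls):
        return 0.0  # malformed or failed tool call
    final = calls[-1]["result"] if calls else None
    return 1.0 if final == expected_result else 0.2

# Example rollouts from an (imagined) agent harness:
rollouts = [
    {"tool_calls": [{"name": "search", "result": "42", "error": None}]},
    {"tool_calls": [{"name": "search", "result": "41", "error": None}]},
    {"tool_calls": [{"name": "search", "result": None, "error": "timeout"}]},
]

rewards = [verify_trace(t, expected_result="42") for t in rollouts]
print(rewards)  # → [1.0, 0.2, 0.0]; these feed the policy update
```

In practice the verifier is often a programmatic check (tests pass, API returned the right value) rather than a learned reward model, which is what makes tool-use tasks comparatively easy to put in an RL loop.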

If you want to look at some of the tooling and process for this, check out verifiers (https://github.com/PrimeIntellect-ai/verifiers), hermes (https://github.com/nousresearch/hermes-agent) and accompanying trace datasets (https://huggingface.co/datasets/kai-os/carnice-glm5-hermes-t...), and other open source tools and harnesses.


Replies

mmargenot · yesterday at 8:36 PM

Here’s an explicit example of this from today, using the dataset linked above: https://x.com/kaiostephens/status/2040396678176362540?s=46