Hacker News

viksit · yesterday at 3:58 AM

great point, appreciate the comment. totally agree with your framing, though i think there’s still a gap in how tool use is handled today.

quick note: it doesn’t have to be an rnn. i’ve got a follow-up example coming that uses a transformer-style ToolController with self-attention, more expressive routing, etc.

but here’s the thing: when you rely on few-shot bootstrapping the LLM, you never update the model’s priors. even after 100k tool calls, you’re still stuck in the same polluted context window, and it’s all stateless.

this gets worse fast with more than 3–4 tool calls, especially when there’s branching logic (e.g., if api1 > 5, go left, else right).

what this approach offers is backprop through tool calls: you can tune prompts and update priors across the full workflow, end to end. i’m trying to develop this intuition a bit more and would love feedback.
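to make the "updating priors" point concrete, here's a deliberately tiny sketch (numpy only, all names hypothetical, not the actual ToolController): a linear controller scores tools from a query embedding, and a gradient step on cross-entropy shifts its routing priors. the part after the definitions is exactly what stateless few-shot prompting never does.

```python
import numpy as np

rng = np.random.default_rng(0)
TOOLS = ["search", "calculator", "api1"]

# controller weights: these ARE the learnable priors over tool routing
W = rng.normal(scale=0.1, size=(len(TOOLS), 4))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def route(query_emb):
    """distribution over tools for this query"""
    return softmax(W @ query_emb)

def update(query_emb, correct_tool, lr=0.5):
    """one SGD step on cross-entropy -log p[correct_tool]"""
    global W
    p = route(query_emb)
    grad = np.outer(p, query_emb)           # d(-log p[y])/dW = (p - onehot(y)) q^T
    grad[TOOLS.index(correct_tool)] -= query_emb
    W -= lr * grad

q = rng.normal(size=4)
before = route(q)[TOOLS.index("calculator")]
for _ in range(50):
    update(q, "calculator")
after = route(q)[TOOLS.index("calculator")]
print(before, after)  # probability of the right tool rises after training
```

the transformer-style version replaces the linear scorer with self-attention over the tool-call history, but the training signal flows through the routing decision the same way.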

thanks for the suggestion on the eval — will post that comparison soon.


Replies

rybosome · yesterday at 3:20 PM

That’s cool, I’d love to see the advanced ToolController when it’s available!

Great points about not updating priors. I also thought about it a bit more and realized that there’s a way you can largely mitigate the out-of-distribution inference requests after local tool selection, if you wanted to.

The tool use loop in an inference framework builds up a history of each interaction and sends it along with each subsequent request. You could create “synthetic history”, where you send the LLM a history containing the prompt, your local tool selection masquerading as though the LLM generated it, and the tool response. This would be in-distribution but still rely on your local tool routing.
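The synthetic-history idea could be sketched roughly like this (message shapes mimic a generic chat-completions API; the router, tool, and field names are all illustrative, not any specific framework):

```python
def local_tool_router(prompt):
    # stand-in for the locally trained router; this is NOT the LLM choosing
    return {"name": "get_weather", "arguments": '{"city": "Paris"}'}

def run_tool(call):
    # stand-in tool execution
    return '{"temp_c": 21}'

def build_synthetic_history(prompt):
    """Build a transcript where the local routing decision is attributed
    to the assistant, so the follow-up request stays in-distribution."""
    call = local_tool_router(prompt)
    result = run_tool(call)
    return [
        {"role": "user", "content": prompt},
        # masquerade: the locally-chosen call appears as assistant output
        {"role": "assistant", "tool_calls": [
            {"id": "call_0", "type": "function", "function": call}]},
        {"role": "tool", "tool_call_id": "call_0", "content": result},
    ]

history = build_synthetic_history("what's the weather in paris?")
# the next request would send `history`, and the model answers as though
# it had routed the tool call itself
```

The LLM only ever sees a transcript it could plausibly have produced, while the routing decision itself came from the local controller.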

If this works well enough, then I think your approach is very powerful once you’ve decided on a task and set of tools and are able to commit to training on that. Definitely want to try this myself now.

Looking forward to seeing more! I take it your substack is the best place to follow along?