Hacker News

dvt · 04/23/2025 · 2 replies

> Is anyone doing it this way?

I'm working on a way of invoking tools mid-tokenizer-stream, which is kind of cool. So for example, the LLM says something like (simplified example) "(lots of thinking)... 1+2=" and then a parser (maybe regex, maybe LR, maybe LL(1), etc.) sees that this is a "math-y thing" and automagically hands it to the CALC tool, which calculates "3" and sticks it in the stream, so the current head is "(lots of thinking)... 1+2=3 " and the LLM can continue with its thought process.
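A minimal sketch of the idea, assuming a plain string stream and a regex standing in for the "parser"; the names (MATH_TAIL, calc_tool, stream_with_tools) are hypothetical, not any particular library's API, and a real setup would work at the token/logit level and feed the injected text back into the model's context:

    import re

    # Matches a trailing arithmetic expression ending in "=", e.g. "1+2="
    MATH_TAIL = re.compile(r"(\d+(?:\s*[+\-*/]\s*\d+)+)\s*=$")

    def calc_tool(expr: str) -> str:
        # Stand-in CALC tool: evaluate simple integer arithmetic with
        # builtins stripped. (A real tool would parse/validate properly.)
        return str(eval(expr, {"__builtins__": {}}, {}))

    def stream_with_tools(token_stream):
        """Yield tokens, injecting tool results when the head looks math-y."""
        head = ""
        for tok in token_stream:
            head += tok
            yield tok
            m = MATH_TAIL.search(head)
            if m:
                result = calc_tool(m.group(1))
                head += result + " "   # splice the result into the head
                yield result + " "     # the LLM would continue from here

    # Fake token stream standing in for LLM output
    tokens = ["(lots of thinking)... ", "1", "+", "2", "="]
    print("".join(stream_with_tools(tokens)))
    # -> "(lots of thinking)... 1+2=3 "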


Replies

namaria · 04/23/2025

Cold winds are blowing when people look at LLMs and think "maybe an expert system on top of that?".

sanderjd · 04/23/2025

Definitely an interesting thought to do this at the tokenizer level!