
caminanteblanco · yesterday at 3:37 PM

The only problem is that having Claude rewrite the prompt negates some of the efficiency and latency benefits of using mini. For system prompts this obviously doesn't matter, but for actual continuous user interaction it feels unworkable.

It definitely makes sense that improving formatting and clarity for these smaller models would really help with performance, but I'm wondering whether GPT-5-mini is already smart enough to handle that reformatting itself, rewriting the prompt before handing it off to another instance of itself.

Overall an awesome article!


Replies

blndrt · today at 6:10 AM

Thank you!

Great point. Indeed, my methodology was to treat the prompt refactoring as a one-off task, so I didn't worry much about cost/latency.

As for having GPT-5-mini do the rewriting — that’s a really interesting idea. I think the biggest challenge is avoiding cognitive overload. The Tau² agent policies are pretty complex: it’s easy to grasp the overall task, but the detailed rules for each user case aren’t always obvious.

I'm not sure how easy it is to actually overload GPT-5-mini, so that's definitely worth exploring.
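
Something like the two-pass sketch below is roughly what I'd try first. To be clear, the client plumbing, the "gpt-5-mini" model string, and the rewrite instructions are placeholders, not the exact setup from the article:

    # Sketch: have the small model rewrite its own policy, then act on it.
    # Assumes the OpenAI Python SDK; "gpt-5-mini" is a placeholder model name.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-5-mini"

    REWRITE_INSTRUCTIONS = (
        "Rewrite the following agent policy so a small model can follow it: "
        "short imperative steps, explicit decision points, no ambiguity. "
        "Do not add or remove rules."
    )

    def rewrite_policy(raw_policy: str) -> str:
        # Pass 1: the small model reformats the policy for itself.
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": REWRITE_INSTRUCTIONS},
                {"role": "user", "content": raw_policy},
            ],
        )
        return resp.choices[0].message.content

    def run_agent(policy: str, user_message: str) -> str:
        # Pass 2: a fresh instance of the same model acts on the rewritten policy.
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": policy},
                {"role": "user", "content": user_message},
            ],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        raw = open("policy.txt").read()
        improved = rewrite_policy(raw)  # one-off cost, cache and reuse afterwards
        print(run_agent(improved, "I need to change my flight to Tuesday."))

Since the rewrite only happens once per policy, the extra pass shouldn't hurt per-conversation latency; the open question is whether pass 1 preserves all the edge-case rules.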