Hacker News

anabis · today at 1:10 AM · 5 replies

The AI should decide what level of model is needed, and fall back to a stronger one if it fails. It's mostly a UX problem: why do I need to specify the model tier beforehand? Many problems don't allow that decision before implementation.
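The fallback idea could be sketched roughly like this. Everything here is hypothetical: the tier names, `call_model()`, and `looks_ok()` are stand-ins, not a real API, and the verification step is exactly the part other replies point out is hard.

```python
# Hypothetical sketch: try a cheap model first, check the answer,
# and escalate to a stronger tier only when the check fails.

MODEL_TIERS = ["small", "medium", "large"]  # assumed tier names


def call_model(model: str, prompt: str) -> str:
    # Stand-in for a real API call; the "small" tier is simulated
    # as producing an unusable answer so the fallback path runs.
    if model == "small":
        return ""
    return f"[{model}] answer to: {prompt}"


def looks_ok(answer: str) -> bool:
    # Placeholder quality check; in practice this judgment is itself
    # a hard (and potentially expensive) problem.
    return bool(answer.strip())


def answer_with_fallback(prompt: str) -> str:
    for model in MODEL_TIERS:
        answer = call_model(model, prompt)
        if looks_ok(answer):
            return answer
    raise RuntimeError("all model tiers failed")
```

Here the escalation logic is trivial; the whole scheme stands or falls on how good `looks_ok()` actually is.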


Replies

YmiYugy · today at 9:01 AM

Because judging failure is itself a complex task requiring a potentially expensive model.

jeremyjh · today at 2:04 AM

This is the approach of Auto in Cursor and I've not been impressed with it at all. I think I'm always getting Composer, and while it's fast it wastes my time. GLM 5.1 in OpenCode is far better and less expensive; it can do both planning and implementation very effectively. Opus is still the best, but GPT 5.4 (in Codex) is good enough too, and way more affordable.

Vegenoid · today at 3:18 AM

This would require LLMs to be good at knowing when they are doing a bad job, which they are still terrible at. With a good testing and verification harness set up, sure, then it could escalate to a more powerful model if it can't make the tests pass. But not a lot of usage is like that.
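The harness-gated variant described above replaces the model's self-assessment with an objective signal. A minimal sketch, where `run_tests()` and `generate_patch()` are hypothetical stand-ins (a real harness would run an actual test suite):

```python
# Hypothetical sketch: escalate to a stronger model only when the
# candidate patch fails the test suite, rather than trusting the
# model's own judgment of success.

def run_tests(patch: str) -> bool:
    # Stand-in for running the project's tests against the patch.
    return "fix" in patch


def generate_patch(model: str, task: str) -> str:
    # Stand-in model call; the cheap model is simulated as producing
    # an incomplete patch so the escalation path is exercised.
    return f"fix for {task}" if model == "strong" else "stub"


def solve(task: str, models=("cheap", "strong")) -> str:
    for model in models:
        patch = generate_patch(model, task)
        if run_tests(patch):  # objective gate, not self-assessment
            return patch
    raise RuntimeError("no model produced a passing patch")
```

The gate only works where such a harness exists, which is the caveat in the comment above.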

koonsolo · today at 8:59 AM

At the current cost, I just use the best model all the time. Why wouldn't I?

timr · today at 1:36 AM

That’s certainly an opinion. Not one I agree with, but sure, if you entirely outsource all of your thinking to the magic box, then you probably want the box to have the strongest possible magic.