I think maybe there are subsets of problems where you can have either a human or a smart LLM write a verifier (e.g. a property-based test?) plus a performance measurement, and then let the dumb models generate and iterate on candidates?
Yeah, maybe, but then it would make much more sense to run a big model than hope one of the small ones randomly stumbles upon the solution, just because the possibility space is so much larger than the number of dumb LLMs you can run.
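For what it's worth, the generate-and-verify loop being discussed could look something like this minimal sketch. It assumes a hypothetical task (sorting) and two hypothetical model-generated candidates; the verifier is a hand-rolled property-based test over random inputs, and a timing pass picks the fastest candidate that survives verification:

```python
import random
import time

random.seed(0)  # reproducible random inputs for the property test

def verify(candidate, trials=200):
    """Property-based verifier: output must equal the sorted input
    on many randomly generated lists."""
    for _ in range(trials):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        if candidate(list(xs)) != sorted(xs):
            return False
    return True

def measure(candidate, reps=100):
    """Crude performance measurement: wall-clock time over fixed input."""
    xs = [random.randint(-1000, 1000) for _ in range(500)]
    start = time.perf_counter()
    for _ in range(reps):
        candidate(list(xs))
    return time.perf_counter() - start

# Hypothetical stand-ins for candidates a cheap model might emit.
def candidate_a(xs):
    return sorted(xs)           # correct

def candidate_b(xs):
    return sorted(set(xs))      # buggy: silently drops duplicates

correct = [c for c in (candidate_a, candidate_b) if verify(c)]
best = min(correct, key=measure)
print(best.__name__)
```

The point of the sketch is that the verifier, not the generator, carries the correctness burden: the buggy candidate is filtered out by random inputs containing duplicates, so the selection step only ever ranks candidates that already pass.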