> I wonder if there is a more general solution that can make models spend more compute on making important choices
There's a lot of work across several research streams on letting models vary compute per token dynamically, e.g. Universal Transformers with adaptive computation time. Maybe one day it'll work well enough to beat conventional techniques.
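For intuition, the ACT (Adaptive Computation Time) idea behind Universal Transformers can be sketched roughly like this. Everything here is a toy: the layers are random linear maps rather than trained transformer blocks, the halting unit is untrained, and real ACT blends intermediate states by remainder weights instead of taking the last state.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_layers = 16, 6

# toy stand-ins for transformer blocks: random linear maps + tanh
layers = [(rng.normal(0, 0.3, (d, d)), rng.normal(0, 0.1, d))
          for _ in range(n_layers)]
halt_w = rng.normal(0, 0.5, d)  # hypothetical, untrained halting unit

def adaptive_forward(x, eps=0.01):
    """ACT-style loop: each token keeps going through layers until its
    cumulative halting probability reaches 1 - eps, so per-token depth
    (i.e. compute) varies -- some tokens take more steps than others."""
    n = x.shape[0]
    state = x.copy()
    cum = np.zeros(n)               # cumulative halting probability
    depth = np.zeros(n, dtype=int)  # layers actually used per token
    out = np.zeros_like(x)
    alive = np.ones(n, dtype=bool)
    for W, b in layers:
        if not alive.any():
            break
        state[alive] = np.tanh(state[alive] @ W + b)
        p = 1 / (1 + np.exp(-(state @ halt_w)))  # per-token halting prob
        depth[alive] += 1
        cum[alive] += p[alive]
        halting = alive & (cum >= 1 - eps)
        out[halting] = state[halting]  # (real ACT blends states by
        alive &= ~halting              #  remainder weights; simplified)
    out[alive] = state[alive]  # tokens that never halted: final state
    return out, depth

x = rng.normal(0, 1, (5, d))
out, depth = adaptive_forward(x)
print(depth)  # per-token depth actually used
```

The point is the per-token `depth`: a trained halting unit could learn to keep "important" tokens running longer, which is exactly the spend-more-compute-on-important-choices behaviour asked about above.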