Hacker News

overflow897 · 05/15/2025 · 0 replies

I believe we're already using LLMs to evaluate LLM output for training; I wonder if some variation of that could be used to identify when an LLM gets "stuck".

I guess chain of thought should, in theory, do that, but variations on the prompt and context might behave differently?
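A minimal sketch of the "stuck" detection idea, using a cheap lexical heuristic as a stand-in for a judge model (the function name, window size, and overlap threshold are all my own assumptions, not anything from the comment; in practice the `overlap` check could be replaced by a call to a second LLM that scores whether recent steps are making progress):

```python
from collections import Counter

def looks_stuck(outputs, window=3, threshold=0.8):
    """Heuristic stand-in for an LLM judge: flag a generation loop as
    'stuck' when the last `window` outputs are near-duplicates of each
    other (high word overlap). All parameters are illustrative."""
    if len(outputs) < window:
        return False
    recent = outputs[-window:]

    def overlap(a, b):
        # Fraction of shared word occurrences between two outputs.
        ta, tb = Counter(a.split()), Counter(b.split())
        shared = sum((ta & tb).values())
        total = max(sum(ta.values()), sum(tb.values()), 1)
        return shared / total

    # Stuck only if every pair of recent outputs is nearly identical.
    return all(
        overlap(recent[i], recent[j]) >= threshold
        for i in range(window)
        for j in range(i + 1, window)
    )
```

A judge-model variant would keep the same loop structure but ask a second LLM, with a differently phrased prompt, whether the trace shows progress, which is roughly the prompt/context-variation idea above.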