
hmottestad 01/21/2025

That gut-feeling approach is very human-like. You have a bias, and even when the facts say you are wrong, you assume there must be a mistake, because your original bias is so strong.

Maybe we need a dozen LLMs with different biases. Let them try to convince the main reasoning LLM that it’s wrong in various ways.

Or just have an LLM trained on some kind of critical-thinking dataset that, instead of focusing on facts, focuses on identifying assumptions.
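The multi-critic idea above could be sketched roughly like this. Everything here is hypothetical: the `Critic` objects and the `revise` callback are stand-in stubs, not a real LLM API — in practice each would wrap a model call primed with a different bias.

```python
# Hypothetical sketch: a "reasoner" drafts an answer, several biased
# "critics" try to convince it that it's wrong, and the draft is
# revised until no critic objects (or rounds run out).
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Critic:
    name: str
    # Returns an objection string, or None if the critic accepts the answer.
    challenge: Callable[[str, str], Optional[str]]


def debate(question: str, draft: str, critics: List[Critic],
           revise: Callable[[str, str, List[str]], str],
           rounds: int = 3) -> str:
    """Let differently-biased critics attack the draft; revise it
    until all critics are satisfied or the round budget is spent."""
    answer = draft
    for _ in range(rounds):
        objections = [o for c in critics
                      if (o := c.challenge(question, answer)) is not None]
        if not objections:
            break  # every critic accepts the current answer
        answer = revise(question, answer, objections)
    return answer


# Toy stand-ins so the sketch actually runs: one critic whose "bias"
# is to demand that hidden assumptions be stated explicitly.
skeptic = Critic(
    "assumption-hunter",
    lambda q, a: "What assumption underlies this?" if "assuming" not in a else None,
)

final = debate(
    "Is the report correct?",
    "Yes, the report is correct.",
    [skeptic],
    # A real reasoner would rewrite the answer using the objections;
    # here we just append a caveat.
    lambda q, a, objs: a + " (assuming the source data is accurate)",
)
print(final)
```

With real models behind `challenge` and `revise`, each critic would carry a different system prompt (the "bias"), which is what makes the disagreement useful.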


Replies

kridsdale1 01/21/2025

That would be a true Mixture of Experts.

I sometimes pit the 4 biggest models against each other like this to converge on an optimal solution.