
qwertox today at 12:30 AM

> If LLMs give only one answer, no matter what nuances are at play, that sounds like they are failing to judge and instead are diminishing the thought process down to black-and-white thinking.

You can have a team of agents exchange views, and maybe the protocol would even allow cases to be settled automatically. The more agents you have, the more nuance you capture.
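Roughly what I mean, as a minimal sketch: a few rounds where each agent sees the others' latest views, then settle automatically on unanimity (or fall back to a majority vote). query_model is a hypothetical stand-in for whichever LLM backend each agent runs on.

```python
from collections import Counter


def query_model(agent: str, case: str, peer_views: list[str]) -> str:
    # Placeholder: a real setup would call an LLM here, passing the case
    # and the other agents' current views. For illustration only, every
    # agent returns the same canned verdict.
    return "guilty" if "theft" in case else "not guilty"


def deliberate(case: str, agents: list[str], rounds: int = 3) -> str:
    views: dict[str, str] = {}
    for _ in range(rounds):
        # Each agent reconsiders the case in light of the others' views.
        views = {
            a: query_model(a, case, [v for b, v in views.items() if b != a])
            for a in agents
        }
        verdicts = Counter(views.values())
        verdict, count = verdicts.most_common(1)[0]
        if count == len(agents):  # unanimous: settle automatically
            return verdict
    return verdicts.most_common(1)[0][0]  # otherwise, majority wins


print(deliberate("alleged theft of a bicycle", ["agent_a", "agent_b", "agent_c"]))
```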


Replies

jagged-chisel today at 12:55 AM

Presumably all these agents would have been trained on different data, with different viewpoints? Otherwise, what makes them different enough from each other that such a "conversation" would matter?

viraptor today at 1:46 AM

Then you'd need to give them access to the law, to previous cases, to the news, to various other data sources. And you'd have to decide how much each of those sources matters. At that point, in practice, it's people making the decision again rather than the AI.

And then there's the question of which model is used. Turns out I've got preferences for which model I'd rather be judged by, and it's not Grok, for example...