
LLM-as-a-Courtroom

58 points by jmtulloss yesterday at 6:32 PM | 25 comments

Comments

test6554 yesterday at 11:41 PM

Defence attorney: "Judge, I object"

Judge: "On what grounds?"

Defence attorney: "On whichever grounds you find most compelling"

Judge: "I have sustained your objection based on speculation..."

aryamanagraw yesterday at 10:14 PM

We kept asking LLMs to rate things on 1-10 scales and getting inconsistent results. Turns out they're much better at arguing positions than assigning numbers, which makes sense given their training data. The courtroom structure (prosecution, defense, jury, judge) gave us adversarial checks we couldn't get from a single prompt. Curious if anyone has experimented with other domain-specific frameworks to scaffold LLM reasoning.
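A minimal sketch of what that courtroom loop might look like in code. The call_llm placeholder, the role prompts, and the majority-vote "judge" are assumptions for illustration, not the authors' exact implementation:

    def call_llm(prompt: str) -> str:
        # Placeholder: wire up whatever chat-completion client you use.
        raise NotImplementedError

    def evaluate_pr(pr_diff: str, docs: str, n_jurors: int = 5) -> bool:
        # Prosecution argues the PR makes the docs stale.
        prosecution = call_llm(
            "Argue that this PR makes the docs below stale.\n"
            f"PR diff:\n{pr_diff}\n\nDocs:\n{docs}"
        )
        # Defense argues no docs update is needed.
        defense = call_llm(
            "Argue that this PR does NOT require a docs update.\n"
            f"PR diff:\n{pr_diff}\n\nDocs:\n{docs}"
        )
        # Jurors vote after reading both arguments.
        votes = []
        for _ in range(n_jurors):
            verdict = call_llm(
                "You are a juror. Reply with exactly UPDATE or NO_UPDATE.\n"
                f"Prosecution:\n{prosecution}\n\nDefense:\n{defense}"
            )
            votes.append(verdict.strip().upper().startswith("UPDATE"))
        # "Judge": a simple majority vote instead of asking one prompt for a 1-10 score.
        return sum(votes) > n_jurors // 2

The point of the structure is that each role emits an argument or a categorical verdict, and only the aggregation step turns that into a decision, sidestepping the inconsistent-numeric-rating problem.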

nader24 today at 2:23 AM

This is a fascinating architecture, but I’m wondering about the cost and latency profile per PR. Running a Prosecutor, Defense, 5 Jurors, and a Judge for every merged PR seems like a massive token overhead compared to a standard RAG check.
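For a rough sense of the overhead, a back-of-envelope calculation; the token counts and per-token prices below are illustrative placeholders, not figures from the article:

    CONTEXT_TOKENS = 4_000         # diff + doc excerpt sent to each role (assumed)
    OUTPUT_TOKENS = 500            # argument/verdict length per role (assumed)
    PRICE_IN = 2.50 / 1_000_000    # $ per input token (placeholder rate)
    PRICE_OUT = 10.00 / 1_000_000  # $ per output token (placeholder rate)

    def cost(calls: int) -> float:
        return calls * (CONTEXT_TOKENS * PRICE_IN + OUTPUT_TOKENS * PRICE_OUT)

    courtroom = cost(1 + 1 + 5 + 1)  # prosecutor + defense + 5 jurors + judge
    single = cost(1)                 # one-shot check
    print(f"courtroom ~= ${courtroom:.3f}/PR, single ~= ${single:.3f}/PR, "
          f"ratio {courtroom / single:.0f}x")

Under these assumptions the token cost is simply linear in the number of roles, so roughly 8x a single check; the jurors can at least run in parallel, so latency need not scale the same way.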

jpollock yesterday at 11:33 PM

Is the LLM an expensive way to solve this? Would a more traditional predictive model be better, where the LLM summarizes the PR and the model predicts the likelihood of needing to update the doc?

Does using an LLM help avoid the cost of training a more specific model?
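A rough sketch of that two-stage idea, assuming a small labeled history of past PRs; the summaries, labels, and TF-IDF + logistic-regression choice are all placeholders for illustration:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled history: (PR summary, 1 = docs were updated afterwards)
    summaries = [
        "renames public config flag --timeout to --request-timeout",
        "fixes flaky unit test in scheduler",
        "adds new REST endpoint for exporting reports",
        "bumps lint tooling version",
    ]
    labels = [1, 0, 1, 0]

    # Stage 2: cheap supervised classifier over the stage-1 summaries.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(summaries, labels)

    new_summary = "deprecates the v1 authentication API"
    prob = model.predict_proba([new_summary])[0, 1]
    print(f"P(doc update needed) ~= {prob:.2f}")

The tradeoff raised in the question is visible here: this approach needs a labeled history of PRs to train on, which is exactly the cost a zero-shot LLM setup avoids.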

emsign yesterday at 11:54 PM

An LLM does not understand what "user harm" is. This doesn't work.
