We kept asking LLMs to rate things on 1-10 scales and getting inconsistent results. Turns out they're much better at arguing positions than assigning numbers, which makes sense given their training data. The courtroom structure (prosecution, defense, jury, judge) gave us adversarial checks we couldn't get from a single prompt. Curious if anyone has experimented with other domain-specific frameworks to scaffold LLM reasoning.
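For anyone curious what that looks like in practice, here's a minimal sketch of a courtroom-style pipeline. To be clear, this is my own guess at the shape of it, not the author's implementation: `call_llm` is a stand-in for whatever chat client you use, and the role prompts and 5-juror majority vote are illustrative assumptions.

```python
def call_llm(system: str, user: str) -> str:
    """Placeholder for a single chat-completion call; wire up your own client."""
    raise NotImplementedError

def courtroom_review(pr_diff: str, doc_excerpt: str) -> dict:
    case = f"PR diff:\n{pr_diff}\n\nCurrent doc:\n{doc_excerpt}"

    # Prosecution argues the doc is now stale; defense argues it still holds.
    prosecution = call_llm("Argue that this PR makes the documentation outdated.", case)
    defense = call_llm("Argue that the documentation is still accurate after this PR.", case)

    arguments = f"{case}\n\nProsecution:\n{prosecution}\n\nDefense:\n{defense}"

    # Jurors vote independently instead of emitting a 1-10 score.
    votes = [
        call_llm("You are a juror. Answer only UPDATE or KEEP.", arguments)
        for _ in range(5)
    ]

    # Judge writes the final ruling, constrained by the jury's majority verdict.
    majority = "UPDATE" if sum("UPDATE" in v.upper() for v in votes) >= 3 else "KEEP"
    ruling = call_llm(
        f"You are the judge. The jury's majority verdict is {majority}. "
        "Explain the ruling and, if UPDATE, list which sections need changes.",
        arguments,
    )
    return {"votes": votes, "verdict": majority, "ruling": ruling}
```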
This is a fascinating architecture, but I’m wondering about the cost and latency profile per PR. Running a Prosecutor, Defense, 5 Jurors, and a Judge for every merged PR seems like a massive token overhead compared to a standard RAG check.
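Back-of-envelope on that overhead (every number below is an assumption I picked for illustration, not a measurement): 8 calls per PR versus a single call for a plain RAG check.

```python
# Rough per-PR cost estimate; token counts and pricing are assumed, not measured.
CALLS = {"prosecutor": 1, "defense": 1, "juror": 5, "judge": 1}   # 8 LLM calls
TOKENS_PER_CALL = 3_000        # assumed prompt + completion tokens per call
PRICE_PER_1K_TOKENS = 0.01     # assumed blended $/1K tokens

calls = sum(CALLS.values())
tokens_per_pr = calls * TOKENS_PER_CALL
cost_per_pr = tokens_per_pr / 1_000 * PRICE_PER_1K_TOKENS

# Compare against a single-call RAG check with the same per-call token budget.
rag_cost = TOKENS_PER_CALL / 1_000 * PRICE_PER_1K_TOKENS
print(f"courtroom: {tokens_per_pr} tokens ~= ${cost_per_pr:.2f}/PR, "
      f"about {cost_per_pr / rag_cost:.0f}x a single RAG check")
```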
Is the LLM an expensive way to solve this? Would a purpose-built predictive model be better? Then the LLM summarizes the PR and the model predicts the likelihood of needing to update the doc?
Does using an LLM help avoid the cost of training a more specific model?
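Rough sketch of what that summarize-then-classify setup could look like; the TF-IDF features, the scikit-learn model, and the `summarize_pr` helper are my assumptions, and you'd need historical PRs labeled by whether the docs were actually updated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def summarize_pr(diff: str) -> str:
    """Placeholder: one cheap LLM call that turns a diff into a short summary."""
    raise NotImplementedError

def train(summaries: list[str], doc_updated: list[int]):
    """Fit a small classifier on past PR summaries labeled 1 if docs changed."""
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(summaries, doc_updated)
    return model

def needs_doc_update(model, diff: str, threshold: float = 0.5) -> bool:
    # At review time: one LLM call plus a near-free prediction,
    # instead of a full prosecutor/defense/jury/judge run.
    summary = summarize_pr(diff)
    return model.predict_proba([summary])[0][1] >= threshold
```

The trade-off is that the classifier only gives you a probability, not the argued rationale the courtroom setup produces.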
An LLM does not understand what "user harm" is. This doesn't work.
Defence attorney: "Judge, I object"
Judge: "On what grounds?"
Defence attorney: "On whichever grounds you find most compelling"
Judge: "I have sustained your objection based on speculation..."