Similarly, law professor Rob Anderson joked on X that LLM-hallucinated cases are good law:
https://x.com/ProfRobAnderson/status/2019078989348774129
> Indeed hallucinated cases are "better law." Drawing on Ronald Dworkin's theory of law as integrity, which posits that ideal legal decisions must "fit" existing precedents while advancing principled justice, this article argues that these hallucinations represent emergent normative ideals. AI models, trained on vast corpora of real case law, synthesize patterns to produce rulings that optimally align with underlying legal principles, filling gaps in the doctrinal landscape. Rather than errors, they embody the "cases that should exist," reflecting a Hercules-like judge's holistic interpretation.
Seems naive. You can get an LLM to agree with almost anything if you prompt it the right way, and it will hallucinate citations to back you up without skipping a beat. You could probably get it to hallucinate case law legalizing murder on Mondays.