Evaluating Legal Reasoning Traces with Legal Issue Tree Rubrics
By: Jinu Lee, Kyoung-Woon On, Simeng Han, and more
Potential Business Impact:
Helps AI understand and explain legal arguments better.
Evaluating the quality of LLM-generated reasoning traces in expert domains (e.g., law) is essential for ensuring credibility and explainability, yet it remains challenging due to the inherent complexity of such reasoning tasks. We introduce LEGIT (LEGal Issue Trees), a large-scale (24K instances) expert-level legal reasoning dataset built with an emphasis on reasoning trace evaluation. We convert court judgments into hierarchical trees of opposing parties' arguments and the court's conclusions, which serve as rubrics for evaluating the issue coverage and correctness of the reasoning traces. We verify the reliability of these rubrics through human expert annotations and through comparison with coarser, less informative rubrics. Using the LEGIT dataset, we show that (1) LLMs fall short in expert-level legal reasoning on both legal issue coverage and correctness, and that (2) retrieval-augmented generation (RAG) and reinforcement learning (RL) with rubrics bring complementary benefits: RAG improves overall reasoning capability, while RL improves correctness at the cost of some coverage.
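To make the rubric idea concrete, below is a minimal sketch of how an issue-tree rubric could be represented and used to score a reasoning trace. The schema (`IssueNode`), the `score_trace` function, and the `judgments` mapping are illustrative assumptions, not the paper's actual data format or scoring protocol; in practice the per-issue judgments would come from an LLM-as-judge or a human annotator.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class IssueNode:
    """One legal issue in the rubric tree (hypothetical schema)."""
    issue_id: str
    issue: str              # the disputed point, e.g. a party's argument
    court_conclusion: str   # how the court resolved it
    children: List["IssueNode"] = field(default_factory=list)


def flatten(node: IssueNode) -> List[IssueNode]:
    """Collect every issue in the tree (pre-order traversal)."""
    return [node] + [n for child in node.children for n in flatten(child)]


def score_trace(rubric: IssueNode, judgments: Dict[str, bool]) -> Dict[str, float]:
    """
    `judgments` maps issue_id -> whether the reasoning trace resolved that
    issue consistently with the court's conclusion; an issue absent from the
    dict is treated as not addressed at all.
    Coverage = fraction of rubric issues the trace addresses.
    Correctness = fraction of addressed issues resolved in line with the court.
    """
    issues = flatten(rubric)
    addressed = [n for n in issues if n.issue_id in judgments]
    correct = [n for n in addressed if judgments[n.issue_id]]
    coverage = len(addressed) / len(issues) if issues else 0.0
    correctness = len(correct) / len(addressed) if addressed else 0.0
    return {"coverage": coverage, "correctness": correctness}


# Example: a two-level rubric for a hypothetical contract dispute
rubric = IssueNode(
    "1", "Was a valid contract formed?", "Yes: offer and acceptance were present.",
    children=[
        IssueNode("1.1", "Defendant argues the offer lapsed.", "Rejected: acceptance was timely."),
        IssueNode("1.2", "Plaintiff argues consideration existed.", "Accepted."),
    ],
)
print(score_trace(rubric, {"1": True, "1.1": False}))
# -> {'coverage': 0.666..., 'correctness': 0.5}
```

In this sketch the trace addresses two of the three rubric issues (coverage 2/3) and resolves one of those two in agreement with the court (correctness 1/2), mirroring the coverage/correctness split the paper reports separately.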
Similar Papers
Judicial Requirements for Generative AI in Legal Reasoning
Artificial Intelligence
Helps AI understand and argue legal cases.
A Law Reasoning Benchmark for LLM with Tree-Organized Structures including Factum Probandum, Evidence and Experiences
Artificial Intelligence
Helps judges make fair decisions by showing how they think.
Towards Trustworthy Legal AI through LLM Agents and Formal Reasoning
Artificial Intelligence
Makes AI judge cases fairly and explain why.