A Law Reasoning Benchmark for LLM with Tree-Organized Structures including Factum Probandum, Evidence and Experiences
By: Jiaxin Shen, Jinan Xu, Huiqi Hu, and more
Potential Business Impact:
Helps judges make fair decisions by showing the reasoning behind them.
While progress has been made in legal applications, law reasoning, which is crucial for fair adjudication, remains largely unexplored. We propose a transparent law reasoning schema enriched with hierarchical factum probandum, evidence, and implicit experience, enabling public scrutiny and helping to prevent bias. Inspired by this schema, we introduce a challenging task that takes a textual case description as input and outputs a hierarchical structure justifying the final decision. We also create the first crowd-sourced dataset for this task, enabling comprehensive evaluation. In addition, we propose an agent framework that employs a comprehensive suite of legal analysis tools to address this task. The benchmark paves the way for transparent and accountable AI-assisted law reasoning in the "Intelligent Court".
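To make the task output concrete, here is a minimal Python sketch of what such a tree-organized structure could look like. The class and field names (FactumProbandum, Evidence, experience) and the toy contract example are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    """A piece of evidence cited from the case description (hypothetical fields)."""
    text: str    # evidence span quoted or paraphrased from the case text
    source: str  # e.g. "documentary evidence", "witness testimony"

@dataclass
class FactumProbandum:
    """A fact to be proven, organized as a tree node.

    Child facta decompose the parent fact; leaf facta are supported directly
    by evidence plus implicit experience (legal common-sense rules).
    """
    statement: str
    evidence: List[Evidence] = field(default_factory=list)
    experience: List[str] = field(default_factory=list)  # implicit experience rules
    children: List["FactumProbandum"] = field(default_factory=list)

# Toy example: a root factum decomposed into two sub-facts.
root = FactumProbandum(
    statement="The defendant is liable for breach of contract.",
    children=[
        FactumProbandum(
            statement="A valid contract existed between the parties.",
            evidence=[Evidence("Signed agreement dated 2021-03-01.", "documentary evidence")],
            experience=["A signed written agreement ordinarily evidences a valid contract."],
        ),
        FactumProbandum(
            statement="The defendant failed to deliver the goods on time.",
            evidence=[Evidence("Delivery logs show no shipment before the deadline.", "documentary evidence")],
            experience=["Absence of shipment records suggests non-delivery."],
        ),
    ],
)
```

In this reading, a model (or agent framework) would produce such a tree from the raw case text, so each step from evidence to conclusion is open to public scrutiny.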
Similar Papers
Towards Trustworthy Legal AI through LLM Agents and Formal Reasoning
Artificial Intelligence
Makes AI judge cases fairly and explain why.
Evaluating Legal Reasoning Traces with Legal Issue Tree Rubrics
Artificial Intelligence
Helps AI understand and explain legal arguments better.
HiBench: Benchmarking LLMs Capability on Hierarchical Structure Reasoning
Computation and Language
Teaches computers to understand how things are organized.