LegalRikai: Open Benchmark -- A Benchmark for Complex Japanese Corporate Legal Tasks
By: Shogo Fujita, Yuji Naraki, Yiqing Zhu, and more
Potential Business Impact:
Helps lawyers check legal documents faster.
This paper introduces LegalRikai: Open Benchmark, a new benchmark of four complex tasks that emulate Japanese corporate legal practice, created by legal professionals under the supervision of an attorney. The benchmark contains 100 samples that require long-form, structured outputs, which we evaluate against multiple practical criteria. We conducted both human and automated evaluations using leading LLMs, including GPT-5, Gemini 2.5 Pro, and Claude Opus 4.1. The human evaluation revealed that abstract instructions prompted unnecessary modifications, exposing weaknesses in document-level editing that conventional short-text tasks miss. Our analysis further shows that automated evaluation aligns well with human judgment on criteria with clear linguistic grounding, while assessing structural consistency remains a challenge. These results demonstrate the utility of automated evaluation as a screening tool when expert availability is limited. We propose the dataset and evaluation framework to promote more practice-oriented research in the legal domain.
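To illustrate how LLM-based screening of this kind could work, below is a minimal sketch of rubric-style automated evaluation with an LLM judge. This is not the paper's actual pipeline: the criterion names, the prompt template, and the call_llm stub are hypothetical placeholders for whatever judge model and rubric a practitioner would use.

```python
# Hypothetical sketch of rubric-based automated evaluation with an LLM judge.
# Criteria, prompt, and call_llm() are illustrative assumptions, not the
# benchmark's published evaluation protocol.

import json
from dataclasses import dataclass

# Example criteria loosely inspired by the abstract: the first two are
# linguistically grounded (where judge-human agreement tends to be higher),
# the third targets document-level structure (noted as harder to automate).
CRITERIA = [
    "faithfulness_to_source",
    "instruction_compliance",
    "structural_consistency",
]

JUDGE_PROMPT = """You are a legal-domain evaluator.
Criterion: {criterion}

Instruction given to the model:
{instruction}

Model output (long-form, structured legal text):
{output}

Score the output from 1 (fails) to 5 (fully satisfies) on this criterion only.
Respond as JSON: {{"score": <int>, "rationale": "<one sentence>"}}"""


def call_llm(prompt: str) -> str:
    """Placeholder for a real judge-model API call (e.g., GPT-5 or Gemini 2.5 Pro)."""
    return json.dumps({"score": 3, "rationale": "stub response"})


@dataclass
class Sample:
    instruction: str
    output: str


def evaluate(sample: Sample) -> dict[str, int]:
    """Score one sample against every criterion, one judge call per criterion."""
    scores = {}
    for criterion in CRITERIA:
        reply = call_llm(JUDGE_PROMPT.format(
            criterion=criterion,
            instruction=sample.instruction,
            output=sample.output,
        ))
        scores[criterion] = json.loads(reply)["score"]
    return scores


if __name__ == "__main__":
    demo = Sample(
        instruction="Revise only the indemnification clause; leave all other clauses unchanged.",
        output="...revised contract text...",
    )
    print(evaluate(demo))
```

In a setup like this, per-criterion scores from the judge could be used to triage outputs for expert review, matching the screening role the abstract describes; criteria without clear linguistic grounding (such as structural consistency) would still warrant human checks.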
Similar Papers
Automatic Legal Writing Evaluation of LLMs
Computation and Language
Helps AI judge legal writing like a lawyer.
PRBench: Large-Scale Expert Rubrics for Evaluating High-Stakes Professional Reasoning
Computation and Language
Tests AI on real-world law and money problems.
JBE-QA: Japanese Bar Exam QA Dataset for Assessing Legal Domain Knowledge
Computation and Language
Tests if computers understand Japanese law.