LexGenius: An Expert-Level Benchmark for Large Language Models in Legal General Intelligence

Published: December 4, 2025 | arXiv ID: 2512.04578v1

By: Wenjin Liu, Haoran Luo, Xin Feng, and more

Potential Business Impact:

Tests if AI understands and reasons like a lawyer.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Legal general intelligence (GI) refers to artificial intelligence (AI) that encompasses legal understanding, reasoning, and decision-making, simulating the expertise of legal professionals across domains. However, existing benchmarks are result-oriented and fail to systematically evaluate the legal intelligence of large language models (LLMs), hindering the development of legal GI. To address this, we propose LexGenius, an expert-level Chinese legal benchmark for evaluating legal GI in LLMs. It follows a Dimension-Task-Ability framework covering seven dimensions, eleven tasks, and twenty abilities. We use recent legal cases and exam questions to create multiple-choice questions, combining manual and LLM reviews to reduce data leakage risks and ensure accuracy and reliability through multiple rounds of checks. We evaluate 12 state-of-the-art LLMs on LexGenius and conduct an in-depth analysis. We find significant disparities across legal intelligence abilities, with even the best LLMs lagging behind human legal professionals. We believe LexGenius can assess the legal intelligence abilities of LLMs and advance the development of legal GI. Our project is available at https://github.com/QwenQKing/LexGenius.
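The abstract describes evaluating LLMs on multiple-choice questions grouped along dimensions of the Dimension-Task-Ability framework. A minimal sketch of how such per-dimension accuracy scoring might look is below; the field names (`"dimension"`, `"answer"`) and example data are illustrative assumptions, not LexGenius's actual schema or results.

```python
# Hypothetical sketch: score multiple-choice predictions overall and per dimension.
# Item fields and data here are assumptions for illustration only.
from collections import defaultdict

def score_by_dimension(items, predictions):
    """Return overall accuracy and a per-dimension accuracy dict for MCQ items."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item, pred in zip(items, predictions):
        dim = item["dimension"]
        total[dim] += 1
        if pred == item["answer"]:
            correct[dim] += 1
    per_dim = {d: correct[d] / total[d] for d in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_dim

# Toy example (not real benchmark data):
items = [
    {"dimension": "legal reasoning", "answer": "B"},
    {"dimension": "legal reasoning", "answer": "D"},
    {"dimension": "legal understanding", "answer": "A"},
]
preds = ["B", "C", "A"]
overall, per_dim = score_by_dimension(items, preds)
```

Reporting accuracy per dimension, rather than a single aggregate score, is what would surface the kind of ability-level disparities the paper reports.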

Repos / Data Links
https://github.com/QwenQKing/LexGenius

Page Count
22 pages

Category
Computer Science:
Computation and Language