Score: 1

Not ready for the bench: LLM legal interpretation is unstable and out of step with human judgments

Published: October 29, 2025 | arXiv ID: 2510.25356v1

By: Abhishek Purushothama, Junghyun Min, Brandon Waldon, and more

Potential Business Impact:

AI does not yet interpret legal language reliably or in line with how people read it.

Business Areas:
Legal Tech, Professional Services

Legal interpretation in the U.S. judicial system frequently involves assessing how a legal text, as understood by an "ordinary" speaker of the language, applies to the facts of a dispute. Recent scholarship has proposed that legal practitioners add large language models (LLMs) to their interpretive toolkit. This work offers an empirical argument against LLM interpretation as recently practiced by legal scholars and federal judges. The investigation, conducted in English, shows that models do not provide stable interpretive judgments: varying the question format can lead a model to wildly different conclusions. Moreover, the models show only weak to moderate correlation with human judgment, with large variance across models and question variants, suggesting that it is dangerous to give much credence to conclusions produced by generative AI.
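To make the paper's evaluation setup concrete, the following is a minimal sketch (not the authors' code) of how one might quantify the reported instability: compare human "ordinary meaning" judgments against model judgments elicited under different question formats and check how the correlation shifts. The variant names and all numbers are hypothetical placeholders.

```python
# Minimal illustrative sketch, not the paper's methodology.
# Assumption: judgments are numeric scores per test item; scipy is available.
from scipy.stats import spearmanr

# Hypothetical ratings: one human judgment per item, plus one model judgment
# per question variant (e.g., yes/no, Likert-scale, forced-choice phrasings
# of the same interpretive question).
human_judgments = [0.9, 0.2, 0.7, 0.4, 0.8, 0.1]

model_judgments_by_variant = {
    "yes_no":        [0.8, 0.3, 0.9, 0.2, 0.7, 0.4],
    "likert_scale":  [0.5, 0.6, 0.4, 0.7, 0.3, 0.8],
    "forced_choice": [0.9, 0.1, 0.6, 0.5, 0.9, 0.2],
}

# Instability shows up as a large spread in correlation across variants:
# the same model, asked the same question in different formats, can look
# anywhere from well-aligned to poorly aligned with human judgment.
for variant, scores in model_judgments_by_variant.items():
    rho, p_value = spearmanr(human_judgments, scores)
    print(f"{variant:>14}: Spearman rho = {rho:+.2f} (p = {p_value:.2f})")
```

In the paper's terms, a wide spread of correlations across such variants (and across models) is what undercuts the case for treating any single LLM response as evidence of ordinary meaning.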

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
17 pages

Category
Computer Science: Computation and Language