Identifying Legal Holdings with LLMs: A Systematic Study of Performance, Scale, and Memorization
By: Chuck Arvin
Potential Business Impact:
Shows that computers can understand legal cases without relying on memorized text.
As large language models (LLMs) continue to advance in capabilities, it is essential to assess how they perform on established benchmarks. In this study, we present a suite of experiments to assess the performance of modern LLMs (ranging from 3B to 90B+ parameters) on CaseHOLD, a legal benchmark dataset for identifying case holdings. Our experiments demonstrate scaling effects: performance on this task improves with model size, with more capable models like GPT-4o and Amazon Nova Pro achieving macro F1 scores of 0.744 and 0.720 respectively. These scores are competitive with the best published results on this dataset and require no technically sophisticated model training, fine-tuning, or few-shot prompting. To ensure that these strong results are not due to memorization of judicial opinions contained in the training data, we develop and utilize a novel citation anonymization test that preserves semantic meaning while ensuring case names and citations are fictitious. Models maintain strong performance under these conditions (macro F1 of 0.728), suggesting the performance is not due to rote memorization. These findings demonstrate both the promise and current limitations of LLMs for legal tasks, with important implications for the development and measurement of automated legal analytics and legal benchmarks.
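To make the citation anonymization idea concrete, here is a minimal illustrative sketch (not the authors' actual pipeline): real case names and reporter citations are matched with regular expressions and swapped for fictitious placeholders, leaving the surrounding legal reasoning untouched. The specific patterns and placeholder strings are assumptions for illustration only.

```python
import re

# Hypothetical patterns for this sketch: a simple "X v. Y" case name
# and a "volume REPORTER page" citation (e.g. "123 F.3d 456").
CASE_NAME = re.compile(r"\b[A-Z][A-Za-z.]+ v\. [A-Z][A-Za-z.]+\b")
REPORTER = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.]*\s+\d{1,4}\b")

def anonymize(text, fake_name="Roe v. Moe", fake_cite="123 X.2d 456"):
    """Replace case names and citations with fictitious ones,
    preserving the semantic content of the passage."""
    text = CASE_NAME.sub(fake_name, text)
    text = REPORTER.sub(fake_cite, text)
    return text

sample = "As held in Smith v. Jones, 123 F.3d 456, the duty applies."
print(anonymize(sample))
# The holding itself ("the duty applies") is untouched, so a model
# cannot lean on a memorized case name to answer correctly.
```

A memorization-free model should score similarly on the original and anonymized versions of each question, which is the comparison the paper's test performs.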
Similar Papers
Highlighting Case Studies in LLM Literature Review of Interdisciplinary System Science
Computation and Language
Helps scientists find answers in research papers faster.
Large-Language Memorization During the Classification of United States Supreme Court Cases
Computation and Language
Helps computers remember legal cases better.
Improving the Accuracy and Efficiency of Legal Document Tagging with Large Language Models and Instruction Prompts
Computation and Language
Helps lawyers sort legal papers faster.