Large-Language Memorization During the Classification of United States Supreme Court Cases
By: John E. Ortega, Dhruv D. Joshi, Matt P. Borkowski
Potential Business Impact:
Helps computers remember legal cases better.
Large language models (LLMs) have been shown to respond in a variety of ways on classification tasks outside of question answering. LLM responses are sometimes called "hallucinations" because the output is not what is expected. Memorization strategies in LLMs are being studied in detail, with the goal of understanding how LLMs respond. We perform a deep dive into a classification task based on United States Supreme Court (SCOTUS) decisions. The SCOTUS corpus is an ideal classification task for studying LLM memory accuracy because it poses significant challenges: extensive sentence length, complex legal terminology, non-standard structure, and domain-specific vocabulary. We experiment with the latest LLM fine-tuning and retrieval-based approaches, such as parameter-efficient fine-tuning and auto-modeling, on two traditional category-based SCOTUS classification tasks: one with 15 labeled topics and another with 279. We show that prompt-based models with memories, such as DeepSeek, can be more robust than previous BERT-based models on both tasks, scoring about 2 points better than prior models not based on prompting.
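The prompt-based classification setup the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the topic names are an illustrative subset of the 15-label task, and `fake_llm` is a stub standing in for a real prompted model such as DeepSeek.

```python
# Sketch of prompt-based topic classification over SCOTUS labels.
# TOPICS is a hypothetical subset of the 15 coarse labels, for illustration only.
TOPICS = ["Criminal Procedure", "Civil Rights", "First Amendment", "Due Process"]

def build_prompt(opinion_text: str) -> str:
    """Assemble a classification prompt listing the candidate topics."""
    labels = "\n".join(f"- {t}" for t in TOPICS)
    return (
        "Classify the following Supreme Court opinion into exactly one topic.\n"
        f"Topics:\n{labels}\n\nOpinion:\n{opinion_text}\n\nTopic:"
    )

def normalize_label(raw: str) -> str:
    """Map a free-form LLM completion onto the closest known label.
    Off-list answers (one kind of 'hallucination') fall through to UNKNOWN."""
    raw = raw.strip().lower()
    for topic in TOPICS:
        if topic.lower() in raw:
            return topic
    return "UNKNOWN"

def classify(opinion_text: str, llm) -> str:
    """Prompt the model and normalize its free-text answer to a label."""
    return normalize_label(llm(build_prompt(opinion_text)))

# Stub in place of a real LLM API call (hypothetical).
fake_llm = lambda prompt: "The topic is Criminal Procedure."
print(classify("The defendant moved to suppress evidence...", fake_llm))
```

The normalization step matters in practice: because a prompted model returns free text rather than a label index, any answer outside the label set must be caught explicitly rather than silently scored as a class.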
Similar Papers
Identifying Legal Holdings with LLMs: A Systematic Study of Performance, Scale, and Memorization
Computation and Language
Helps computers understand legal cases without memorizing them.
Memorization $\neq$ Understanding: Do Large Language Models Have the Ability of Scenario Cognition?
Computation and Language
Computers don't truly understand stories; they just remember them.
Highlighting Case Studies in LLM Literature Review of Interdisciplinary System Science
Computation and Language
Helps scientists find answers in research papers faster.