Score: 1

Large-Language Memorization During the Classification of United States Supreme Court Cases

Published: December 15, 2025 | arXiv ID: 2512.13654v1

By: John E. Ortega, Dhruv D. Joshi, Matt P. Borkowski

Potential Business Impact:

Helps legal-research and semantic-search tools classify Supreme Court cases more accurately.

Business Areas:
Semantic Search, Internet Services

Large language models (LLMs) have been shown to respond in a variety of ways on classification tasks outside of question-answering. LLM responses are sometimes called "hallucinations" when the output is not what is expected. Memorization strategies in LLMs are being studied in detail with the goal of understanding how LLMs respond. We perform a deep dive into a classification task based on United States Supreme Court (SCOTUS) decisions. The SCOTUS corpus is an ideal testbed for studying LLM memory accuracy because it presents significant challenges: extensive sentence length, complex legal terminology, non-standard structure, and domain-specific vocabulary. Experimentation is performed with the latest LLM fine-tuning and retrieval-based approaches, such as parameter-efficient fine-tuning and auto-modeling, on two traditional category-based SCOTUS classification tasks: one with 15 labeled topics and another with 279. We show that prompt-based models with memories, such as DeepSeek, can be more robust than previous BERT-based models on both tasks, scoring about 2 points better than previous models not based on prompting.
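The abstract names parameter-efficient fine-tuning among the approaches tried. As a rough illustration of what that setup typically looks like, the sketch below applies LoRA adapters to a BERT-style encoder for the 15-topic task; the backbone, hyperparameters, and Hugging Face `peft` usage are assumptions, not the paper's exact configuration.

```python
# Minimal sketch, assuming a Hugging Face setup: LoRA-based
# parameter-efficient fine-tuning of an encoder for SCOTUS topic
# classification. Model name and hyperparameters are illustrative.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

MODEL_NAME = "bert-base-uncased"  # assumed backbone; the paper's may differ

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=15,  # the coarse 15-topic task; the fine-grained task has 279
)

# Freeze the backbone and train only small low-rank adapter matrices,
# which is what makes the fine-tuning "parameter-efficient".
peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,              # low-rank dimension (assumed)
    lora_alpha=16,    # adapter scaling factor (assumed)
    lora_dropout=0.1,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Tokenize one opinion excerpt; long SCOTUS texts must be truncated to
# the encoder's context window, one of the challenges the abstract notes.
batch = tokenizer(
    "The petitioner contends that the statute violates due process...",
    truncation=True,
    max_length=512,
    return_tensors="pt",
)
logits = model(**batch).logits  # shape: (1, 15)
```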
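The headline result concerns prompt-based models such as DeepSeek. Below is a minimal sketch of that style of classification, assuming DeepSeek's OpenAI-compatible chat API; the prompt wording, label subset, and model name are illustrative rather than the paper's protocol.

```python
# Minimal sketch, assuming DeepSeek's OpenAI-compatible endpoint:
# prompt-based classification of a SCOTUS opinion into one topic label.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.deepseek.com")

# Illustrative subset of the 15 topic labels; the paper's full label set
# is not reproduced here.
LABELS = ["Criminal Procedure", "Civil Rights", "First Amendment", "Federalism"]

opinion_excerpt = "The petitioner contends that the statute violates due process..."

response = client.chat.completions.create(
    model="deepseek-chat",
    temperature=0,  # deterministic output suits classification
    messages=[
        {"role": "system",
         "content": "You label U.S. Supreme Court opinions with exactly one topic."},
        {"role": "user",
         "content": f"Choose one label from {LABELS} for this opinion:\n\n{opinion_excerpt}"},
    ],
)
print(response.choices[0].message.content)  # expected: one label string
```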

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
11 pages

Category
Computer Science: Computation and Language