Classifying long legal documents using short random chunks
By: Luis Adrián Cabrera-Diego
Classifying legal documents is challenging: besides their specialized vocabulary, they can be very long. This means that feeding full documents to a Transformer-based model for classification can be impossible, expensive, or slow. We therefore present a legal document classifier based on DeBERTa V3 and an LSTM that takes as input a collection of 48 randomly selected short chunks (max. 128 tokens). In addition, we present its deployment pipeline built on Temporal, a durable execution solution, which allows us to run a reliable and robust processing workflow. The best model achieved a weighted F-score of 0.898, while the pipeline, running on CPU, had a median processing time of 498 seconds per 100 files.
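The sketch below illustrates, under stated assumptions, the kind of architecture the abstract describes: a document is tokenized, cut into short fixed-size chunks, a random subset of chunks is sampled, each chunk is encoded with DeBERTa V3, and an LSTM aggregates the chunk embeddings into a single document-level prediction. Only the chunk count (48) and chunk length (128 tokens) come from the abstract; the model name `microsoft/deberta-v3-base`, the use of the [CLS] embedding per chunk, the bidirectional LSTM configuration, and the helper names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a chunk-based long-document classifier.
# Assumptions (not from the paper): encoder checkpoint, CLS pooling,
# bidirectional LSTM aggregation, and all helper/variable names.
import random
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel


class ChunkedDocClassifier(nn.Module):
    def __init__(self, num_classes: int, encoder_name: str = "microsoft/deberta-v3-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Bidirectional LSTM over the sequence of chunk embeddings.
        self.lstm = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # input_ids / attention_mask: (num_chunks, chunk_len) for one document.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        chunk_emb = out.last_hidden_state[:, 0, :]        # one [CLS] vector per chunk
        _, (h_n, _) = self.lstm(chunk_emb.unsqueeze(0))   # treat chunks as a sequence
        doc_emb = torch.cat([h_n[0], h_n[1]], dim=-1)     # concat final fwd/bwd states
        return self.classifier(doc_emb)                   # (1, num_classes) logits


def sample_chunks(text: str, tokenizer, num_chunks: int = 48, chunk_len: int = 128):
    """Tokenize the whole document, split it into fixed-size chunks,
    and randomly sample up to `num_chunks` of them."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [ids[i:i + chunk_len] for i in range(0, len(ids), chunk_len)]
    chunks = random.sample(chunks, min(num_chunks, len(chunks)))
    batch = tokenizer.pad(
        [{"input_ids": c} for c in chunks],
        padding="max_length", max_length=chunk_len, return_tensors="pt",
    )
    return batch["input_ids"], batch["attention_mask"]


# Example usage (hypothetical document and label count):
# tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
# model = ChunkedDocClassifier(num_classes=5)
# ids, mask = sample_chunks(open("contract.txt").read(), tokenizer)
# logits = model(ids, mask)
```

Random sampling keeps the per-document cost bounded regardless of length: the encoder only ever sees at most 48 chunks of 128 tokens, and the LSTM provides a cheap way to combine them into one document representation.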
Similar Papers
Beyond Token Limits: Assessing Language Model Performance on Long Text Classification
Computation and Language
Helps computers understand very long texts, like laws.
Scaling Legal AI: Benchmarking Mamba and Transformers for Statutory Classification and Case Law Retrieval
Computers and Society
Helps computers understand long legal texts faster.