SynLexLM: Scaling Legal LLMs with Synthetic Data and Curriculum Learning
By: Ojasw Upadhyay, Abishek Saravanakumar, Ayman Ismail
Potential Business Impact:
Helps lawyers understand legal documents faster.
Large Language Models (LLMs) are powerful but often require extensive fine-tuning and large datasets for specialized domains like law. General-purpose pre-training may not capture legal nuances, and acquiring sufficient legal data is challenging. We introduce SynLexLM, a novel approach to efficiently pre-train a legal LLM. Our method combines curriculum learning, which progresses from simple to complex legal texts and queries, with synthetic data augmentation using models like Gemini Pro to address data scarcity. We aim for improved performance on legal benchmarks (BigLaw-Bench, EUR-Lex-Sum) relative to both general-purpose models and their fine-tuned counterparts. Preliminary work involves generating synthetic QA pairs that reflect legal reasoning. This work aims to enhance legal document analysis and research tools, potentially democratizing access to advanced legal AI.
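The abstract pairs two techniques: synthetic QA generation and curriculum ordering from simple to complex text. Below is a minimal sketch of how such a pipeline might be wired together. The paper does not publish its implementation, so everything here is an illustrative assumption: the `call_gemini` function is a hypothetical stub standing in for a real Gemini Pro API call, and the length-based `difficulty_proxy` is an assumed stand-in for whatever complexity measure SynLexLM actually uses.

```python
"""Illustrative sketch: synthetic legal QA generation + curriculum ordering.

Assumptions (not from the paper): `call_gemini` is a stub for a real
Gemini Pro call, and difficulty is approximated by sentence and word
length. The authors' actual pipeline is not public.
"""
from dataclasses import dataclass


@dataclass
class QAPair:
    question: str
    answer: str
    source_text: str
    difficulty: float  # lower = simpler, consumed earlier in the curriculum


def call_gemini(prompt: str) -> str:
    """Hypothetical stub for a Gemini Pro request.

    A real pipeline would send `prompt` to the Gemini API and return the
    model's text; here we return a canned QA string so the sketch runs.
    """
    return ("Q: What obligation does the clause impose?\n"
            "A: A duty to notify the counterparty within 30 days.")


def difficulty_proxy(text: str) -> float:
    """Crude difficulty score: longer sentences and words ~ harder text."""
    words = text.split()
    if not words:
        return 0.0
    sentences = max(text.count(".") + text.count(";"), 1)
    avg_sentence_len = len(words) / sentences
    avg_word_len = sum(len(w) for w in words) / len(words)
    return avg_sentence_len + avg_word_len


def synthesize_qa(passage: str) -> QAPair:
    """Ask the (stubbed) LLM for a QA pair grounded in one legal passage."""
    prompt = (
        "Read the legal passage below and write one question that requires "
        "legal reasoning to answer, followed by its answer.\n\n" + passage
    )
    raw = call_gemini(prompt)
    q, _, a = raw.partition("\nA:")
    return QAPair(
        question=q.removeprefix("Q:").strip(),
        answer=a.strip(),
        source_text=passage,
        difficulty=difficulty_proxy(passage),
    )


def build_curriculum(passages: list[str]) -> list[QAPair]:
    """Generate synthetic QA pairs and order them simple-to-complex."""
    pairs = [synthesize_qa(p) for p in passages]
    return sorted(pairs, key=lambda p: p.difficulty)


if __name__ == "__main__":
    corpus = [
        "The tenant shall pay rent on the first of each month.",
        "Notwithstanding any provision herein to the contrary, the "
        "indemnifying party shall, subject to the limitations of "
        "Section 9.2, hold harmless the indemnified party.",
    ]
    for pair in build_curriculum(corpus):
        print(f"[difficulty={pair.difficulty:.1f}] {pair.question}")
```

In a real pipeline the stub would be replaced by an actual Gemini Pro client and the sorted QA pairs would feed a pre-training data loader in curriculum order; the sorting step is what realizes the simple-to-complex progression the abstract describes.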
Similar Papers
Scaling Laws of Synthetic Data for Language Models
Computation and Language
Creates unlimited training material for smart computers.
Improving the Accuracy and Efficiency of Legal Document Tagging with Large Language Models and Instruction Prompts
Computation and Language
Helps lawyers sort legal papers faster.
Large Language Models Meet Legal Artificial Intelligence: A Survey
Computation and Language
Helps lawyers use smart computers for legal work.