Evaluating the Role of Verifiers in Test-Time Scaling for Legal Reasoning Tasks
By: Davide Romano, Jonathan Schwarz, Daniele Giofré
Potential Business Impact:
Helps lawyers answer legal questions faster and more accurately.
Test-time scaling (TTS) techniques can improve the performance of large language models (LLMs) at the expense of additional computation and latency. While TTS has proven effective in formal domains such as mathematics and programming \citep{snell2024scaling, chen2024more}, its value in argumentative domains such as law remains underexplored. We present an empirical study of verifier-based TTS methods for legal multiple-choice QA (MCQA) across five benchmarks. Using a family of 7 reward models, we evaluate both outcome-level (Best-of-$N$) and process-level (tree search) verification under realistic low-$N$ budgets. Our analysis systematically investigates how verifier utility is affected by key properties such as domain specialization, model size, and supervision type (process-supervised PRMs vs. outcome-only ORMs), including when verifiers are applied outside the role they were trained for.
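The outcome-level strategy the abstract names, Best-of-$N$, can be sketched in a few lines: sample $N$ candidate answers and let a verifier (reward model) pick the highest-scoring one. The `generate` and `score` callables below are hypothetical stand-ins for an LLM sampler and an ORM/PRM, not functions from the paper's codebase; this is a minimal illustration of the selection rule, not the authors' implementation.

```python
# Minimal sketch of outcome-level Best-of-N verification.
# `generate` and `score` are hypothetical stand-ins for an LLM sampler
# and a reward model (ORM/PRM); they are not from the paper's codebase.
from typing import Callable, List


def best_of_n(question: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 4) -> str:
    """Sample n candidate answers and return the one the verifier scores highest."""
    candidates: List[str] = [generate(question) for _ in range(n)]
    return max(candidates, key=lambda ans: score(question, ans))


# Toy usage with deterministic stand-ins:
if __name__ == "__main__":
    answers = iter(["B", "C", "A", "C"])  # pretend samples from an LLM
    gen = lambda q: next(answers)
    rm = lambda q, a: 1.0 if a == "C" else 0.0  # toy reward model preferring "C"
    print(best_of_n("Which clause governs?", gen, rm, n=4))  # prints C
```

Under the "realistic low-$N$ budgets" studied in the paper, the cost of this scheme is $N$ generations plus $N$ verifier scorings per question, which is why verifier quality matters so much at small $N$.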
Similar Papers
Trust but Verify! A Survey on Verification Design for Test-time Scaling
Computation and Language
Helps computers think better by checking their answers.
Variation in Verification: Understanding Verification Dynamics in Large Language Models
Computation and Language
Makes AI better at checking its own answers.