Not ready for the bench: LLM legal interpretation is unstable and out of step with human judgments
By: Abhishek Purushothama, Junghyun Min, Brandon Waldon, and more
Potential Business Impact:
AI cannot reliably interpret laws the way people do.
Legal interpretation in the U.S. judicial system frequently involves assessing how a legal text, as understood by an 'ordinary' speaker of the language, applies to the set of facts characterizing a legal dispute. Recent scholarship has proposed that legal practitioners add large language models (LLMs) to their interpretive toolkit. This work offers an empirical argument against LLM interpretation as recently practiced by legal scholars and federal judges. Our investigation in English shows that models do not provide stable interpretive judgments: varying the question format can lead a model to wildly different conclusions. Moreover, the models show only weak to moderate correlation with human judgment, with large variance across models and question variants, suggesting that it is dangerous to give much credence to the conclusions produced by generative AI.
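To make the kind of evaluation described above concrete, here is a minimal sketch, not the authors' protocol: it assumes hypothetical human ratings, three made-up question formats, and Spearman rank correlation (via scipy) as the agreement measure. Item values, variant names, and thresholds are illustrative placeholders.

```python
# Sketch (not the paper's code): probe how stable an LLM's interpretive answers are
# across question formats, and how well each format tracks human judgments.
from statistics import mean
from scipy.stats import spearmanr

# Hypothetical items: does the legal text cover the described facts?
# Values are proportions of human "yes" responses (assumed data).
human_judgments = [0.9, 0.2, 0.7, 0.4, 0.8]

# Hypothetical model answers to the same items under three question formats,
# rescaled to [0, 1] (e.g., yes/no, Likert, forced choice).
model_by_variant = {
    "yes_no":        [1.0, 0.0, 1.0, 1.0, 0.0],
    "likert":        [0.8, 0.4, 0.6, 0.2, 0.9],
    "forced_choice": [0.2, 0.9, 0.3, 0.8, 0.1],
}

# 1) Stability: pairwise rank correlation of the model's own answers across variants.
variants = list(model_by_variant)
pairwise = [
    spearmanr(model_by_variant[a], model_by_variant[b]).correlation
    for i, a in enumerate(variants)
    for b in variants[i + 1:]
]
print(f"mean cross-variant agreement: {mean(pairwise):.2f}")

# 2) Alignment: correlation of each variant's answers with the human judgments.
for name, scores in model_by_variant.items():
    rho = spearmanr(scores, human_judgments).correlation
    print(f"{name}: Spearman rho vs. humans = {rho:.2f}")
```

Low cross-variant agreement would indicate the instability the abstract reports; weak-to-moderate rho values would indicate poor alignment with human interpreters.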
Similar Papers
LLMs in Interpreting Legal Documents
Computation and Language
Helps lawyers understand and write legal papers faster.
Neither Valid nor Reliable? Investigating the Use of LLMs as Judges
Computation and Language
Shows why AI judges of writing may not be trustworthy.