Parameterized Argumentation-based Reasoning Tasks for Benchmarking Generative Language Models
By: Cor Steging, Silja Renooij, Bart Verheij
Potential Business Impact:
Tests AI's thinking for fair legal decisions.
Generative large language models, as tools in the legal domain, have the potential to improve the justice system. However, the reasoning behavior of current generative models is brittle and poorly understood, and hence cannot yet be responsibly applied in the domains of law and evidence. In this paper, we introduce an approach for creating benchmarks that can be used to evaluate the reasoning capabilities of generative language models. These benchmarks are dynamically varied, scalable in their complexity, and have formally unambiguous interpretations. In this study, we illustrate the approach with witness testimony, focusing on the underlying argument attack structure. We dynamically generate both linear and non-linear argument attack graphs of varying complexity and translate these into reasoning puzzles about witness testimony, expressed in natural language. We show that state-of-the-art large language models often fail at these reasoning puzzles, already at low complexity. The models make obvious mistakes, and their inconsistent performance indicates that their reasoning capabilities are brittle. Furthermore, at higher complexity, even state-of-the-art models specifically presented as capable of reasoning make mistakes. We show the viability of using a parameterized benchmark of varying complexity to evaluate the reasoning capabilities of generative language models. As such, the findings contribute to a better understanding of the limitations of the reasoning capabilities of generative models, which is essential when designing responsible AI systems for the legal domain.
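To make the construction concrete, the sketch below illustrates the kind of pipeline the abstract describes: generate an argument attack graph, determine which arguments are accepted (here under grounded semantics, one common choice), and render the graph as a witness-testimony puzzle in natural language. This is not the authors' code; the function names, the choice of semantics, and the sentence template are assumptions made for illustration.

```python
# Minimal sketch (not the authors' implementation), assuming grounded
# semantics and a simple sentence template for the natural-language puzzle.

def linear_attack_graph(n):
    """Chain of n arguments in which argument i+1 attacks argument i.

    Returns a dict mapping each argument to the list of its attackers.
    """
    return {i: ([i + 1] if i + 1 < n else []) for i in range(n)}


def grounded_accepted(attacks):
    """Arguments accepted under grounded semantics.

    Iteratively accept arguments whose attackers are all rejected, and
    reject arguments with at least one accepted attacker; arguments in
    unresolved cycles remain undecided.
    """
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for arg, attackers in attacks.items():
            if arg in accepted or arg in rejected:
                continue
            if all(a in rejected for a in attackers):
                accepted.add(arg)
                changed = True
            elif any(a in accepted for a in attackers):
                rejected.add(arg)
                changed = True
    return accepted


def render_puzzle(attacks):
    """Translate an attack graph into a natural-language reasoning puzzle."""
    sentences = [
        f"Witness {attacker} states that witness {target} is not to be believed."
        for target, attackers in attacks.items()
        for attacker in attackers
    ]
    question = "Which witnesses should ultimately be believed?"
    return " ".join(sentences) + " " + question


if __name__ == "__main__":
    graph = linear_attack_graph(4)  # 1 attacks 0, 2 attacks 1, 3 attacks 2
    print(render_puzzle(graph))
    print("Expected answer (grounded semantics):",
          sorted(grounded_accepted(graph)))  # [1, 3] for a chain of 4
```

For a chain of four witnesses, the expected answer is that witnesses 1 and 3 are believed; this ground-truth label is what a generated model answer can be scored against, and the graph generator can be parameterized (chain length, non-linear attack patterns) to scale the puzzle's complexity.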
Similar Papers
The Ouroboros of Benchmarking: Reasoning Evaluation in an Era of Saturation
Computation and Language
Tests if smart computers can truly think.
RiddleBench: A New Generative Reasoning Benchmark for LLMs
Computation and Language
Tests AI's smart thinking, finds it struggles.
Evaluating the Logical Reasoning Abilities of Large Reasoning Models
Artificial Intelligence
Tests if computers can think logically like people.