DS@GT at Touché: Large Language Models for Retrieval-Augmented Debate
By: Anthony Miyaguchi, Conor Johnston, Aaryan Potdar
Potential Business Impact: Computers learn to argue and judge debates.
Large Language Models (LLMs) demonstrate strong conversational abilities. In this working paper, we study them in the context of debate in two ways: their ability to participate in a structured debate when supplied with a dataset of supporting arguments, and their ability to evaluate utterances throughout the debate. We deploy six leading publicly available models from three providers for both Retrieval-Augmented Debate and Evaluation. Evaluation is performed by scoring utterances on four key metrics: Quality, Quantity, Manner, and Relation. We find that although LLMs perform well in debates when given relevant arguments, they tend to be verbose in their responses while remaining consistent as evaluators. The accompanying source code for this paper is located at https://github.com/dsgt-arc/touche-2025-rad.
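As a minimal sketch of how such an LLM-based utterance evaluation might look: the function names (`score_utterance`, `call_llm`), the judge prompt wording, and the 1-5 scale below are illustrative assumptions, not the repository's actual implementation; only the four dimensions come from the paper.

```python
import json
from typing import Callable, Dict

# The four evaluation dimensions named in the paper (Gricean maxims).
DIMENSIONS = ["Quantity", "Quality", "Relation", "Manner"]

# Hypothetical judge prompt; the actual wording used in the repository may differ.
JUDGE_PROMPT = """You are judging a single utterance in a structured debate.
Debate topic: {topic}
Utterance: {utterance}

Rate the utterance on each dimension from 1 (poor) to 5 (excellent):
- Quantity: says as much as needed, no more
- Quality: claims are truthful and well supported
- Relation: stays relevant to the topic and prior turns
- Manner: clear, orderly, and unambiguous

Respond with a JSON object mapping each dimension to an integer score."""


def score_utterance(
    call_llm: Callable[[str], str], topic: str, utterance: str
) -> Dict[str, int]:
    """Ask a judge model to score one utterance.

    `call_llm` is a provider-agnostic wrapper (prompt in, raw text out),
    so any of the paper's six models could be plugged in behind it.
    """
    raw = call_llm(JUDGE_PROMPT.format(topic=topic, utterance=utterance))
    scores = json.loads(raw)  # assumes the judge returns valid JSON
    # Keep only the expected dimensions and coerce values to int.
    return {dim: int(scores[dim]) for dim in DIMENSIONS}
```

In this sketch, averaging `score_utterance` results over all turns of a debate would yield per-dimension scores for a debater model; how the paper aggregates scores is not shown here.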
Similar Papers
Can LLMs Judge Debates? Evaluating Non-Linear Reasoning via Argumentation Theory Semantics (Computation and Language). Helps computers understand arguments in debates.
The Social Laboratory: A Psychometric Framework for Multi-Agent LLM Evaluation (Artificial Intelligence). AI agents learn to agree and persuade each other.
Beyond Single Models: Enhancing LLM Detection of Ambiguity in Requests through Debate (Computation and Language). Makes AI understand confusing requests better.