Score: 1

DS@GT at Touché: Large Language Models for Retrieval-Augmented Debate

Published: July 12, 2025 | arXiv ID: 2507.09090v1

By: Anthony Miyaguchi, Conor Johnston, Aaryan Potdar

Potential Business Impact:

Computers learn to argue and judge debates.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) demonstrate strong conversational abilities. In this working paper, we study them in the context of debate in two ways: their ability to participate in a structured debate when given a dataset of related arguments to draw on, and their ability to evaluate utterances throughout the debate. We deploy six leading publicly available models from three providers for the Retrieval-Augmented Debate and Evaluation. The evaluation is performed by measuring four key dimensions: Quality, Quantity, Manner, and Relation. Throughout this task, we found that although LLMs perform well in debates when given related arguments, their responses tend to be verbose, while their evaluations remain consistent. The accompanying source code for this paper is located at https://github.com/dsgt-arc/touche-2025-rad.
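The two roles described in the abstract, debating with retrieved arguments and judging utterances on the four dimensions, can be sketched in a few lines of code. The snippet below is a minimal illustration only: the function names, prompts, retrieval heuristic, and the `complete` callable are hypothetical stand-ins, not the authors' implementation (see the linked repository for that).

```python
# Illustrative sketch of a retrieval-augmented debate turn and a rubric-style
# evaluation over Quality, Quantity, Manner, and Relation. All names and
# prompts here are hypothetical; swap `complete` for a real LLM call.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Utterance:
    speaker: str
    text: str

def retrieve_arguments(claim: str, corpus: List[str], k: int = 3) -> List[str]:
    """Toy lexical retrieval: rank corpus arguments by word overlap with the claim."""
    claim_terms = set(claim.lower().split())
    ranked = sorted(corpus, key=lambda a: -len(claim_terms & set(a.lower().split())))
    return ranked[:k]

def debate_turn(claim: str, history: List[Utterance], corpus: List[str],
                complete: Callable[[str], str]) -> Utterance:
    """Build a prompt from the claim, retrieved arguments, and dialogue history,
    then ask the model (via `complete`) for the next debate utterance."""
    evidence = "\n".join(f"- {a}" for a in retrieve_arguments(claim, corpus))
    transcript = "\n".join(f"{u.speaker}: {u.text}" for u in history)
    prompt = (
        f"Debate claim: {claim}\n"
        f"Relevant arguments:\n{evidence}\n"
        f"Debate so far:\n{transcript}\n"
        "Respond with a concise counter-argument."
    )
    return Utterance(speaker="system", text=complete(prompt))

def evaluate_utterance(utterance: Utterance, claim: str,
                       complete: Callable[[str], str]) -> dict:
    """Score one utterance on each dimension (0-4), using the model as judge."""
    scores = {}
    for dim in ("Quality", "Quantity", "Manner", "Relation"):
        prompt = (
            f"Claim: {claim}\nUtterance: {utterance.text}\n"
            f"Rate the utterance's {dim} on a 0-4 scale. Reply with a single integer."
        )
        reply = complete(prompt)
        digits = [c for c in reply if c.isdigit()]
        scores[dim] = int(digits[0]) if digits else 0
    return scores

if __name__ == "__main__":
    stub = lambda prompt: "2"  # stand-in for a real LLM call so the sketch runs end to end
    corpus = ["Renewable subsidies cut emissions.", "Carbon taxes shift behaviour."]
    turn = debate_turn("Carbon taxes should rise.", [], corpus, complete=stub)
    print(evaluate_utterance(turn, "Carbon taxes should rise.", complete=stub))
```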

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/dsgt-arc/touche-2025-rad

Page Count
9 pages

Category
Computer Science:
Information Retrieval