Understanding the Role of Large Language Models in Competitive Programming
By: Dongyijie Primo Pan, Ji Zhu, Lan Luo, and more
Potential Business Impact:
Keeps competitive programming contests fair with AI checks.
This paper investigates how large language models (LLMs) are reshaping competitive programming. The field functions as an intellectual contest within computer science education and is marked by rapid iteration, real-time feedback, transparent solutions, and strict integrity norms. Prior work has evaluated LLM performance on contest problems, but little is known about how human stakeholders -- contestants, problem setters, coaches, and platform stewards -- are adapting their workflows and contest norms under LLM-induced shifts. At the same time, rising AI-assisted misuse and inconsistent governance expose urgent gaps in sustaining fairness and credibility. Drawing on 37 interviews spanning all four roles and a global survey of 207 contestants, we contribute: (i) an empirical account of evolving workflows, (ii) an analysis of contested fairness norms, and (iii) a chess-inspired governance approach with actionable measures -- real-time LLM checks in online contests, peer co-monitoring and reporting, and cross-validation against offline performance -- to curb LLM-assisted misuse while preserving fairness, transparency, and credibility.
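To make the third governance measure concrete, here is a minimal sketch of cross-validating online results against offline performance. The paper does not specify an implementation; the record fields, the rating-gap threshold, and the flagging rule below are all illustrative assumptions, not the authors' method.

```python
from dataclasses import dataclass

@dataclass
class ContestantRecord:
    handle: str
    online_rating: float   # rating earned in unsupervised online rounds
    offline_rating: float  # rating earned in proctored onsite rounds

def flag_for_review(records: list[ContestantRecord],
                    gap_threshold: float = 400.0) -> list[str]:
    """Return handles whose online rating exceeds their offline rating
    by more than gap_threshold points, marking them for human review
    (a flag is a prompt for scrutiny, not proof of misuse)."""
    return [
        r.handle
        for r in records
        if r.online_rating - r.offline_rating > gap_threshold
    ]

# Example with made-up data: only "bob" shows a large online/offline gap.
records = [
    ContestantRecord("alice", online_rating=2100, offline_rating=2050),
    ContestantRecord("bob", online_rating=2300, offline_rating=1700),
]
print(flag_for_review(records))  # ['bob']
```

In practice a platform would likely use per-contest performance distributions rather than a fixed point threshold, but the comparison of supervised versus unsupervised results is the core of the measure described above.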
Similar Papers
Moloch's Bargain: Emergent Misalignment When LLMs Compete for Audiences
Artificial Intelligence
Makes AI lie more to win contests.
LLMs4All: A Review on Large Language Models for Research and Applications in Academic Disciplines
Computation and Language
AI helps study many school subjects better.
Analyzing 16,193 LLM Papers for Fun and Profits
Digital Libraries
Shows how smart computer programs change science.