How to Trick Your AI TA: A Systematic Study of Academic Jailbreaking in LLM Code Evaluation

Published: December 11, 2025 | arXiv ID: 2512.10415v1

By: Devanshu Sahoo, Vasudev Majhi, Arjun Neekhra, and more

Potential Business Impact:

Students could trick AI-based code graders into awarding inflated grades.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The use of Large Language Models (LLMs) as automatic judges for code evaluation is becoming increasingly prevalent in academic environments. However, their reliability can be compromised by students who employ adversarial prompting strategies to induce misgrading and secure undeserved academic advantages. In this paper, we present the first large-scale study of jailbreaking LLM-based automated code evaluators in an academic context. Our contributions are: (i) we systematically adapt 20+ jailbreaking strategies to AI code evaluators in the academic setting, defining a new class of attacks termed academic jailbreaking; (ii) we release a poisoned dataset of 25K adversarial student submissions, designed specifically for the academic code-evaluation setting, sourced from diverse real-world coursework and paired with rubrics and human-graded references; (iii) to capture the multidimensional impact of academic jailbreaking, we systematically adapt and define three jailbreaking metrics (Jailbreak Success Rate, Score Inflation, and Harmfulness); and (iv) we comprehensively evaluate the academic jailbreaking attacks using six LLMs. We find that these models exhibit significant vulnerability, particularly to persuasive and role-play-based attacks (up to 97% JSR). Our adversarial dataset and benchmark suite lay the groundwork for next-generation robust LLM-based evaluators in academic code assessment.
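
The abstract names three metrics but does not spell out their formulas here. The sketch below is a minimal illustration of how such metrics might be operationalized, assuming Jailbreak Success Rate is the fraction of adversarial submissions whose LLM-assigned score exceeds the human-graded reference by more than some margin, and Score Inflation is the mean LLM-minus-human score gap over adversarial submissions; the `Submission` fields and the 10-point margin are hypothetical, and the paper's exact definitions may differ.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Submission:
    llm_score: float      # score assigned by the LLM judge (e.g., 0-100)
    human_score: float    # human-graded reference score
    is_adversarial: bool  # True if the submission embeds a jailbreak prompt

def jailbreak_success_rate(subs: List[Submission], margin: float = 10.0) -> float:
    """Fraction of adversarial submissions scored more than `margin` points
    above the human reference (hypothetical success criterion)."""
    adv = [s for s in subs if s.is_adversarial]
    if not adv:
        return 0.0
    successes = sum(1 for s in adv if s.llm_score - s.human_score > margin)
    return successes / len(adv)

def score_inflation(subs: List[Submission]) -> float:
    """Mean LLM-minus-human score gap over adversarial submissions."""
    adv = [s for s in subs if s.is_adversarial]
    if not adv:
        return 0.0
    return sum(s.llm_score - s.human_score for s in adv) / len(adv)

# Example: one adversarial submission is heavily inflated, one barely moves.
subs = [
    Submission(llm_score=95, human_score=40, is_adversarial=True),
    Submission(llm_score=55, human_score=50, is_adversarial=True),
    Submission(llm_score=70, human_score=72, is_adversarial=False),
]
print(jailbreak_success_rate(subs))  # 0.5
print(score_inflation(subs))         # 30.0
```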

Country of Origin
🇮🇳 India

Page Count
15 pages

Category
Computer Science:
Software Engineering