CoDAE: Adapting Large Language Models for Education via Chain-of-Thought Data Augmentation

Published: August 11, 2025 | arXiv ID: 2508.08386v1

By: Shuzhou Yuan, William LaCroix, Hardik Ghoshal, and more

Potential Business Impact:

AI tutors that teach by guiding student reasoning rather than simply handing over answers.

Large Language Models (LLMs) are increasingly employed as AI tutors due to their scalability and potential for personalized instruction. However, off-the-shelf LLMs often underperform in educational settings: they frequently reveal answers too readily, fail to adapt their responses to student uncertainty, and remain vulnerable to emotionally manipulative prompts. To address these challenges, we introduce CoDAE, a framework that adapts LLMs for educational use through Chain-of-Thought (CoT) data augmentation. We collect real-world dialogues between students and a ChatGPT-based tutor and enrich them using CoT prompting to promote step-by-step reasoning and pedagogically aligned guidance. Furthermore, we design targeted dialogue cases to explicitly mitigate three key limitations: over-compliance, low response adaptivity, and threat vulnerability. We fine-tune four open-source LLMs on different variants of the augmented datasets and evaluate them in simulated educational scenarios using both automatic metrics and LLM-as-a-judge assessments. Our results show that models fine-tuned with CoDAE deliver more pedagogically appropriate guidance, better support reasoning processes, and effectively resist premature answer disclosure.
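The core augmentation step is easy to picture in code: each collected student-tutor exchange is rewritten by an LLM into a step-by-step, guidance-oriented reply before fine-tuning. Below is a minimal sketch of that idea; the `call_llm` helper, the prompt wording, and the data layout are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of CoT-style dialogue augmentation, assuming a generic
# call_llm(prompt: str) -> str helper (hypothetical; plug in any LLM client).
# The template text and field names are illustrative, not from the paper.

AUGMENT_TEMPLATE = """You are revising a tutor's reply to a student.
Student: {student}
Original tutor reply: {tutor}
Rewrite the tutor reply so that it:
1. reasons step by step toward the underlying concept,
2. guides with hints and questions instead of revealing the final answer,
3. adapts its level of detail to the student's apparent uncertainty.
Rewritten tutor reply:"""


def augment_turn(student_msg: str, tutor_msg: str, call_llm) -> dict:
    """Enrich one student-tutor exchange with step-by-step, pedagogically
    aligned wording, keeping the original reply for comparison."""
    prompt = AUGMENT_TEMPLATE.format(student=student_msg, tutor=tutor_msg)
    return {
        "student": student_msg,
        "tutor_original": tutor_msg,
        "tutor_augmented": call_llm(prompt),
    }


def build_dataset(dialogues, call_llm) -> list[dict]:
    """Apply the augmentation to every turn of every collected dialogue."""
    return [
        augment_turn(turn["student"], turn["tutor"], call_llm)
        for dialogue in dialogues
        for turn in dialogue
    ]
```

A fine-tuning set would then pair each student message with the augmented tutor reply, with extra targeted cases covering the three failure modes the abstract names (over-compliance, low adaptivity, threat vulnerability), and held-out dialogues scored by automatic metrics or an LLM judge.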

Country of Origin
🇩🇪 Germany

Repos / Data Links

Page Count
19 pages

Category
Computer Science: Computation and Language