A Training-Free Large Reasoning Model-based Knowledge Tracing Framework for Unified Prediction and Prescription
By: Unggi Lee, Joo Young Kim, Ran Ju, and more
Potential Business Impact:
Helps computers teach students better and faster, with personalized feedback.
Knowledge Tracing (KT) aims to estimate a learner's evolving mastery from interaction histories. Recent studies have explored Large Language Models (LLMs) for KT by exploiting their autoregressive nature, but such approaches typically require fine-tuning and exhibit unstable or near-random performance. Moreover, prior KT systems focus primarily on prediction and rely on multi-stage pipelines for feedback and recommendation, increasing system complexity and resource costs. To address this gap, we propose Thinking-KT, a training-free KT framework that incorporates Test-Time Scaling (TTS), enabling even small LLMs to achieve competitive KT performance. Within this framework, a small LLM can jointly perform KT prediction, personalized feedback generation, and learning recommendation in a single unified output without degrading prediction accuracy. Beyond performance, we present a systematic analysis of reasoning traces in KT. Our results demonstrate that TTS is a critical yet underexplored factor in LLM-based KT, and that small LLMs can serve as unified Intelligent Tutoring System (ITS) engines.
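To make the training-free, test-time-scaling idea concrete, the sketch below samples a small LLM several times on a single unified prompt, takes the KT prediction by majority vote over the sampled traces, and parses feedback and a recommendation from the same output. The prompt format, the `llm` callable, and the output-parsing scheme are illustrative assumptions, not the paper's actual design.

```python
import re
from collections import Counter
from typing import Callable, Dict, List


def predict_with_tts(
    llm: Callable[[str], str],   # any text-in/text-out model call (assumed interface)
    history: List[Dict],         # e.g. [{"question": "1/2 + 1/4", "correct": True}, ...]
    next_question: str,
    n_samples: int = 8,          # test-time scaling: sample several reasoning traces
) -> Dict:
    """Majority-vote a KT prediction over sampled traces; parse feedback/recommendation
    from the same unified output (hypothetical prompt and format, for illustration)."""
    lines = [
        f"Q: {h['question']} -> {'correct' if h['correct'] else 'incorrect'}"
        for h in history
    ]
    prompt = (
        "You are a knowledge-tracing engine. Given the learner's history, "
        "reason step by step, then answer in exactly this format:\n"
        "PREDICTION: correct|incorrect\n"
        "FEEDBACK: <one sentence>\n"
        "RECOMMEND: <next topic>\n\n"
        + "\n".join(lines)
        + f"\n\nNext question: {next_question}\n"
    )
    votes, feedbacks, recs = [], [], []
    for _ in range(n_samples):
        out = llm(prompt)
        m = re.search(r"PREDICTION:\s*(correct|incorrect)", out, re.I)
        if m:
            votes.append(m.group(1).lower())
        f = re.search(r"FEEDBACK:\s*(.+)", out)
        r = re.search(r"RECOMMEND:\s*(.+)", out)
        if f:
            feedbacks.append(f.group(1).strip())
        if r:
            recs.append(r.group(1).strip())
    label = Counter(votes).most_common(1)[0][0] if votes else "incorrect"
    return {
        "prediction": label,
        "feedback": feedbacks[0] if feedbacks else "",
        "recommendation": recs[0] if recs else "",
    }


if __name__ == "__main__":
    # Stub model call so the sketch runs standalone; a real system would call an LLM here.
    def stub_llm(prompt: str) -> str:
        return ("PREDICTION: correct\n"
                "FEEDBACK: Solid grasp of fraction addition.\n"
                "RECOMMEND: mixed numbers")

    hist = [{"question": "1/2 + 1/4", "correct": True}]
    print(predict_with_tts(stub_llm, hist, "3/4 - 1/8"))
```

Majority voting over sampled traces (self-consistency) is one common form of test-time scaling; the paper may use a different aggregation strategy.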
Similar Papers
LLM-KT: Aligning Large Language Models with Knowledge Tracing using a Plug-and-Play Instruction
Computation and Language
Helps computers guess if students will answer questions right.
TRAIL: Joint Inference and Refinement of Knowledge Graphs with Large Language Models
Information Retrieval
Helps computers learn and remember new facts.
KG-TRACES: Enhancing Large Language Models with Knowledge Graph-constrained Trajectory Reasoning and Attribution Supervision
Computation and Language
Makes AI explain how it gets answers.