Next Token Knowledge Tracing: Exploiting Pretrained LLM Representations to Decode Student Behaviour
By: Max Norris, Kobi Gal, Sahan Bulathwela
Potential Business Impact:
Helps computers better understand how students learn.
Modelling student knowledge is a key challenge when leveraging AI in education, with major implications for personalised learning. The Knowledge Tracing (KT) task aims to predict how students will respond to educational questions in learning environments, based on their prior interactions. Existing KT models typically use response correctness along with metadata such as skill tags and timestamps, often overlooking the question text, which is an important source of pedagogical insight. This omission is a lost opportunity and limits predictive performance. We propose Next Token Knowledge Tracing (NTKT), a novel approach that reframes KT as a next-token prediction task using pretrained Large Language Models (LLMs). NTKT represents both student histories and question content as sequences of text, allowing LLMs to learn patterns in both behaviour and language. Our experiments show that NTKT significantly improves performance over state-of-the-art neural KT models and generalises much better to cold-start questions and users. These findings highlight the importance of question content in KT and demonstrate the benefits of leveraging the pretrained representations of LLMs to model student learning more effectively.
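As a rough illustration of this reframing, the sketch below serialises a student's interaction history (question text plus correctness) into a plain-text prompt and reads a probability of the next response being correct from a pretrained causal LM. The prompt template, the gpt2 placeholder model, and the "correct"/"incorrect" label tokens are assumptions for illustration only; the paper's actual formatting, labels, and fine-tuning setup may differ.

```python
# Minimal sketch of KT as next-token prediction, assuming a Hugging Face
# causal LM. All naming and formatting choices here are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; any pretrained causal LM works for the sketch
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def serialize_history(history):
    """Render prior (question_text, was_correct) pairs as plain text."""
    lines = []
    for question, correct in history:
        label = "correct" if correct else "incorrect"
        lines.append(f"Q: {question}\nA: {label}")
    return "\n".join(lines)

def predict_correctness(history, next_question):
    """Score the next response by next-token prediction over two label tokens."""
    prompt = serialize_history(history) + f"\nQ: {next_question}\nA:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token distribution
    # Compare the first subword of each label; crude, but shows the readout.
    correct_id = tokenizer.encode(" correct")[0]
    incorrect_id = tokenizer.encode(" incorrect")[0]
    probs = torch.softmax(logits[[correct_id, incorrect_id]], dim=0)
    return probs[0].item()  # P(correct), renormalised over the two labels

history = [("What is 3 + 4?", True), ("What is 12 / 4?", False)]
print(predict_correctness(history, "What is 7 - 2?"))
```

Because both behaviour and question content live in the same text sequence, fine-tuning such a model lets it exploit linguistic regularities in the questions, which is what a purely ID-based KT model cannot do and what plausibly drives the cold-start gains reported above.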
Similar Papers
TLSQKT: A Question-Aware Dual-Channel Transformer for Literacy Tracing from Learning Sequences
Computers and Society
Helps computers understand how students learn skills.
Enhancing Knowledge Tracing through Leakage-Free and Recency-Aware Embeddings
Computers and Society
Makes learning tools better at estimating student skills.
Leveraging Knowledge Graphs and Large Language Models to Track and Analyze Learning Trajectories
Computers and Society
Helps teachers find learning gaps and help students.