From Teacher to Student: Tracking Memorization Through Model Distillation

Published: June 19, 2025 | arXiv ID: 2506.16170v2

By: Simardeep Singh

Potential Business Impact:

Reduces memorization of sensitive training data in distilled models, lowering privacy and security risks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) are known to memorize parts of their training data, raising important concerns around privacy and security. While previous research has focused on studying memorization in pre-trained models, much less is known about how knowledge distillation (KD) affects memorization. In this study, the author explores how different KD methods influence the memorization of fine-tuned task data when a large teacher model is distilled into smaller student variants. The results show that distilling a larger teacher model, fine-tuned on a dataset, into a smaller variant not only lowers computational costs and model size but also significantly reduces memorization risks compared to standard fine-tuning.
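For context, a minimal sketch of a standard soft-label knowledge-distillation objective (Hinton-style), which trains the student on a mix of the task labels and the teacher's temperature-softened outputs. This is an illustrative example only; the paper compares several KD variants, and the temperature `T` and mixing weight `alpha` below are placeholder hyperparameters, not values from the paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hard-label cross-entropy mixed with KL divergence to the
    teacher's temperature-softened distribution."""
    # Hard-label loss on the fine-tuning task data.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label loss: match the teacher's softened output distribution.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale to keep gradient magnitude comparable across T
    return alpha * ce + (1.0 - alpha) * kl

# Usage with random tensors standing in for a batch of model outputs.
student_logits = torch.randn(8, 32000)   # (batch, vocab_size)
teacher_logits = torch.randn(8, 32000)
labels = torch.randint(0, 32000, (8,))
loss = kd_loss(student_logits, teacher_logits, labels)
```

The intuition connecting this to memorization is that the student never sees the raw training targets alone; it learns largely from the teacher's smoothed distribution, which can dilute verbatim signals from individual training examples.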

Country of Origin
🇮🇳 India

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)