Score: 1

Dataset distillation for memorized data: Soft labels can leak held-out teacher knowledge

Published: June 17, 2025 | arXiv ID: 2506.14457v1

By: Freya Behrens, Lenka Zdeborová

Potential Business Impact:

Teaches computers new facts from fewer examples.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Dataset distillation aims to compress training data into fewer examples via a teacher, from which a student can learn effectively. While its success is often attributed to structure in the data, modern neural networks also memorize specific facts; whether and how such memorized information can be transferred in distillation settings remains less understood. In this work, we show that students trained on soft labels from teachers can achieve non-trivial accuracy on held-out memorized data they never directly observed. This effect persists on structured data even when the teacher has not generalized. To analyze it in isolation, we consider finite random i.i.d. datasets where generalization is a priori impossible and a successful teacher fit implies pure memorization. Still, students can learn non-trivial information about the held-out data, in some cases up to perfect accuracy. In those settings, enough soft labels are available to recover the teacher functionally: the student matches the teacher's predictions on all possible inputs, including the held-out memorized data. We show that these phenomena strongly depend on the temperature with which the logits are smoothed, but persist across varying network capacities, architectures, and dataset compositions.
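To make the setup concrete, below is a minimal sketch (not the authors' code) of the experiment the abstract describes: a teacher memorizes random i.i.d. labels, a student is trained only on the teacher's temperature-smoothed soft labels for a subset of inputs, and is then evaluated on the held-out memorized points. The sizes, MLP architecture, temperature, and training schedule are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch of soft-label distillation on memorized random data.
torch.manual_seed(0)
d, n, n_classes, temperature = 20, 200, 2, 4.0  # illustrative sizes (assumptions)

# Random i.i.d. inputs with random labels: generalization is impossible,
# so a successful teacher fit implies pure memorization.
X = torch.randn(n, d)
y = torch.randint(0, n_classes, (n,))

def mlp():
    return nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, n_classes))

# 1) Teacher memorizes the full dataset with hard labels.
teacher = mlp()
opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    F.cross_entropy(teacher(X), y).backward()
    opt.step()

# 2) Distill: the student only sees a subset of inputs, labeled with the
#    teacher's temperature-smoothed soft labels (never the true hard labels).
keep = torch.randperm(n)[: n // 2]
held_out = torch.tensor([i for i in range(n) if i not in set(keep.tolist())])
with torch.no_grad():
    soft = F.softmax(teacher(X[keep]) / temperature, dim=1)

student = mlp()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    log_p = F.log_softmax(student(X[keep]) / temperature, dim=1)
    F.kl_div(log_p, soft, reduction="batchmean").backward()  # match soft labels
    opt.step()

# 3) Accuracy on the held-out memorized points the student never observed.
with torch.no_grad():
    acc = (student(X[held_out]).argmax(1) == y[held_out]).float().mean().item()
print(f"held-out memorized accuracy: {acc:.2f} (chance = {1 / n_classes:.2f})")
```

The interesting quantity is the final held-out accuracy: if it exceeds chance, the soft labels on the distilled subset have leaked information about data the student never saw, which is the paper's central observation; the effect's dependence on `temperature` can be probed by rerunning with different values.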

Country of Origin
🇨🇭 Switzerland

Repos / Data Links

Page Count
22 pages

Category
Computer Science:
Machine Learning (CS)