Dataset distillation for memorized data: Soft labels can leak held-out teacher knowledge
By: Freya Behrens, Lenka Zdeborová
Potential Business Impact:
Teaches computers new facts from fewer examples.
Dataset distillation aims to compress training data into fewer examples via a teacher, from which a student can learn effectively. While its success is often attributed to structure in the data, modern neural networks also memorize specific facts; if and how such memorized information can be transferred in distillation settings remains less understood. In this work, we show that students trained on soft labels from teachers can achieve non-trivial accuracy on held-out memorized data they never directly observed. This effect persists on structured data when the teacher has not generalized. To analyze it in isolation, we consider finite random i.i.d. datasets where generalization is a priori impossible and a successful teacher fit implies pure memorization. Still, students can learn non-trivial information about the held-out data, in some cases up to perfect accuracy. In those settings, enough soft labels are available to recover the teacher functionally: the student matches the teacher's predictions on all possible inputs, including the held-out memorized data. We show that these phenomena depend strongly on the temperature with which the logits are smoothed, but persist across varying network capacities, architectures, and dataset compositions.
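For context, the mechanism the abstract refers to is the standard soft-label distillation setup: the teacher's logits are smoothed by a temperature before the student is trained to match them. The sketch below (PyTorch, not the authors' code; function names are illustrative) shows why temperature matters: higher temperatures flatten the distribution and expose more of the teacher's relative logit structure, which is the channel through which memorized information can leak.

```python
import torch
import torch.nn.functional as F

def soft_labels(teacher_logits: torch.Tensor, temperature: float) -> torch.Tensor:
    # Temperature-smoothed teacher predictions. As temperature grows, the
    # distribution flattens and reveals more of the teacher's "dark knowledge"
    # (the relative ordering and spacing of non-argmax logits).
    return F.softmax(teacher_logits / temperature, dim=-1)

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float) -> torch.Tensor:
    # KL divergence between temperature-scaled student and teacher distributions.
    # The temperature**2 factor keeps gradient magnitudes comparable across
    # temperatures, as in Hinton et al. (2015).
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = soft_labels(teacher_logits, temperature)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```

At temperature 1 the soft labels are the teacher's ordinary predictions; in the hard-label limit (temperature near 0) they collapse to one-hot targets and carry no information beyond the argmax, which is one hedged reading of why the paper finds the effect strongly temperature-dependent.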
Similar Papers
From Teacher to Student: Tracking Memorization Through Model Distillation
Machine Learning (CS)
Makes AI models safer by reducing memorization.
Revisiting Knowledge Distillation: The Hidden Role of Dataset Size
Machine Learning (CS)
Makes AI learn better with less data.
Learning Task-Agnostic Representations through Multi-Teacher Distillation
Machine Learning (CS)
Makes computer models learn better from many teachers.