Technical Report on Text Dataset Distillation
By: Keith Ando Ogawa, Bruno Lopes Yamamoto, Lucas Lauton de Alcantara, and more
Potential Business Impact:
Creates small synthetic text datasets that train models nearly as well as the full data.
In the vision domain, dataset distillation emerged as a technique to condense a large dataset into a smaller synthetic one that produces similar results when used to train a model. While image data has an extensive literature of distillation methods, text dataset distillation has comparatively few works. Text dataset distillation initially grew as an adaptation of efforts from the vision domain; as the particularities of the modality became clear obstacles, it evolved into a separate branch of research. Several milestones mark the development of this area, such as the introduction of methods that use transformer models, the generation of discrete synthetic text, and the scaling to decoder-only models with over 1B parameters. Despite major advances in modern approaches, the field remains in a maturing phase, with room for improvement in benchmark standardization, in approaches to overcome the discrete nature of text, in handling complex tasks, and in providing explicit examples of real-world applications. In this report, we review past and recent advances in dataset distillation for text, highlighting different distillation strategies, key contributions, and general challenges.
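To make the objective concrete, dataset distillation is commonly formulated in the literature as a bilevel optimization problem; the notation below is a standard sketch and an assumption on our part, not a formula drawn from the report itself. The inner problem trains model parameters on the small synthetic set, and the outer problem optimizes the synthetic set so that the trained model performs well on the original data.

% Standard bilevel formulation of dataset distillation (illustrative notation,
% not taken from the report): T is the original dataset, S the synthetic one
% with |S| << |T|, and L_D(theta) the training loss of parameters theta on D.
\[
\mathcal{S}^{*} \;=\; \arg\min_{\mathcal{S}} \; \mathcal{L}_{\mathcal{T}}\bigl(\theta^{\mathcal{S}}\bigr)
\quad \text{subject to} \quad
\theta^{\mathcal{S}} \;=\; \arg\min_{\theta} \; \mathcal{L}_{\mathcal{S}}(\theta),
\]
where \(\mathcal{T}\) is the original dataset, \(\mathcal{S}\) is the synthetic dataset with \(|\mathcal{S}| \ll |\mathcal{T}|\), and \(\mathcal{L}_{D}(\theta)\) is the training loss of parameters \(\theta\) on dataset \(D\).

For text, the elements of \(\mathcal{S}\) are sequences of discrete tokens, so the outer minimization cannot be carried out by plain gradient descent on the data themselves; this is the discrete-nature obstacle the report highlights, which the literature typically addresses by, for example, optimizing continuous token embeddings or generating synthetic text with an auxiliary model.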
Similar Papers
Dataset Distillation for Pre-Trained Self-Supervised Vision Models
CV and Pattern Recognition
Creates small, smart picture sets for AI.
EDITS: Enhancing Dataset Distillation with Implicit Textual Semantics
CV and Pattern Recognition
Makes small data learn like big data.
Knowledge Distillation and Dataset Distillation of Large Language Models: Emerging Trends, Challenges, and Future Directions
Computation and Language
Makes big AI models smaller and faster.