Distilling Dataset into Neural Field
By: Donghyeok Shin, HeeSun Bae, Gyuwon Sim, and more
Potential Business Impact:
Makes big training datasets much smaller while keeping what models need to learn.
Utilizing a large-scale dataset is essential for training high-performance deep learning models, but it also comes with substantial computation and storage costs. To overcome these challenges, dataset distillation has emerged as a promising solution by compressing the large-scale dataset into a smaller synthetic dataset that retains the essential information needed for training. This paper proposes a novel parameterization framework for dataset distillation, coined Distilling Dataset into Neural Field (DDiF), which leverages a neural field to store the necessary information of the large-scale dataset. Due to the unique nature of the neural field, which takes coordinates as input and outputs the corresponding quantity, DDiF effectively preserves the information and easily generates data of various shapes. We theoretically confirm that DDiF exhibits greater expressiveness than some previous parameterizations when the budget for a single synthetic instance is the same. Through extensive experiments, we demonstrate that DDiF achieves superior performance on several benchmark datasets, extending beyond the image domain to video, audio, and 3D voxels. We release the code at https://github.com/aailab-kaist/DDiF.
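To make the core idea concrete, below is a minimal sketch (not the authors' implementation; see their repository for that) of a coordinate-based neural field that stores one synthetic instance as network parameters and decodes it at an arbitrary resolution. The class name `CoordinateField`, the layer sizes, and the sine activations are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a coordinate-based neural field for one synthetic instance.
# Assumptions: a small SIREN-style MLP mapping (x, y) in [-1, 1]^2 to C channels.
import torch
import torch.nn as nn


class CoordinateField(nn.Module):
    """Maps 2D coordinates to output values; the parameters are the 'stored' datum."""

    def __init__(self, out_channels: int = 3, hidden: int = 64, layers: int = 3):
        super().__init__()
        dims = [2] + [hidden] * layers + [out_channels]
        self.linears = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        h = coords
        for lin in self.linears[:-1]:
            h = torch.sin(lin(h))  # sine activations (illustrative choice)
        return self.linears[-1](h)

    def decode(self, height: int, width: int) -> torch.Tensor:
        """Query the field on a dense grid to synthesize an H x W image."""
        ys = torch.linspace(-1.0, 1.0, height)
        xs = torch.linspace(-1.0, 1.0, width)
        grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([grid_x, grid_y], dim=-1).reshape(-1, 2)
        out = self.forward(coords)                     # (H*W, C)
        return out.reshape(height, width, -1).permute(2, 0, 1)  # (C, H, W)


if __name__ == "__main__":
    field = CoordinateField(out_channels=3)
    # The same stored parameters can be decoded at different resolutions/shapes,
    # which is what "easily generates data of various shapes" refers to.
    print(field.decode(32, 32).shape)  # torch.Size([3, 32, 32])
    print(field.decode(64, 64).shape)  # torch.Size([3, 64, 64])
```

In this reading, the "budget" for a single synthetic instance corresponds to the number of parameters of the field rather than a fixed pixel grid, which is what the expressiveness comparison in the abstract is about.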
Similar Papers
Efficient Dataset Distillation through Low-Rank Space Sampling
CV and Pattern Recognition
Makes AI learn faster with less data.
Dataset Distillation with Probabilistic Latent Features
CV and Pattern Recognition
Makes big computer brains learn with less data.
Improving Noise Efficiency in Privacy-preserving Dataset Distillation
CV and Pattern Recognition
Makes private data safe for computers to learn from.