Score: 1

Distilling Dataset into Neural Field

Published: March 5, 2025 | arXiv ID: 2503.04835v1

By: Donghyeok Shin, HeeSun Bae, Gyuwon Sim, and more

Potential Business Impact:

Shrinks the large datasets used to train deep learning models into much smaller synthetic ones, cutting storage and compute costs.

Business Areas:
Big Data, Data and Analytics

Utilizing a large-scale dataset is essential for training high-performance deep learning models, but it also comes with substantial computation and storage costs. To overcome these challenges, dataset distillation has emerged as a promising solution: it compresses the large-scale dataset into a smaller synthetic dataset that retains the essential information needed for training. This paper proposes a novel parameterization framework for dataset distillation, coined Distilling Dataset into Neural Field (DDiF), which leverages a neural field to store the necessary information of the large-scale dataset. Because a neural field takes coordinates as input and outputs the corresponding quantity, DDiF effectively preserves the information of the original dataset and can easily generate data of various shapes. We theoretically confirm that DDiF exhibits greater expressiveness than some previous parameterizations when the budget allotted to a single synthetic instance is the same. Through extensive experiments, we demonstrate that DDiF achieves superior performance on several benchmark datasets, extending beyond the image domain to video, audio, and 3D voxel data. The code is released at https://github.com/aailab-kaist/DDiF.
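
To make the parameterization idea concrete, here is a minimal sketch in PyTorch. It is not the authors' released code or architecture; the names CoordinateField and decode are illustrative assumptions. It only shows the core mechanism the abstract describes: each synthetic datum is stored as the parameters of a small coordinate-based network, and it is decoded by querying a coordinate grid, so the same field can be rendered at different resolutions or shapes.

```python
# Minimal sketch of a coordinate-based neural field as a synthetic datum.
# Hypothetical names; not the DDiF repository's actual implementation.
import torch
import torch.nn as nn


class CoordinateField(nn.Module):
    """Tiny MLP mapping (x, y) coordinates in [-1, 1]^2 to channel values."""

    def __init__(self, in_dim: int = 2, hidden: int = 32, out_dim: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 2) -> values: (N, out_dim)
        return self.net(coords)


def decode(field: CoordinateField, height: int, width: int) -> torch.Tensor:
    """Render the field into a (C, H, W) tensor by querying a pixel grid."""
    ys = torch.linspace(-1.0, 1.0, height)
    xs = torch.linspace(-1.0, 1.0, width)
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (H, W, 2)
    values = field(grid.reshape(-1, 2))                                # (H*W, C)
    return values.reshape(height, width, -1).permute(2, 0, 1)


if __name__ == "__main__":
    field = CoordinateField()
    # The "synthetic image" lives entirely in the field's parameters;
    # decoding at 32x32 or 64x64 just means querying a denser grid.
    small = decode(field, 32, 32)
    large = decode(field, 64, 64)
    print(small.shape, large.shape)  # torch.Size([3, 32, 32]) torch.Size([3, 64, 64])
```

In a full dataset-distillation pipeline, the decoded tensors would stand in for raw synthetic images inside whatever distillation objective is used, and the field parameters (rather than pixels) would be optimized; higher-dimensional data such as video or 3D voxels would simply use more input coordinates.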

Country of Origin
🇰🇷 Korea, Republic of

Repos / Data Links
https://github.com/aailab-kaist/DDiF

Page Count
32 pages

Category
Computer Science:
Computer Vision and Pattern Recognition