Compressive Modeling and Visualization of Multivariate Scientific Data using Implicit Neural Representation
By: Abhay Kumar Dwivedi, Shanu Saklani, Soumya Dutta
Potential Business Impact:
Shrinks large scientific datasets while preserving essential detail.
The extensive adoption of Deep Neural Networks has led to their increased utilization in challenging scientific visualization tasks. Recent advancements in building compressed data models using implicit neural representations have shown promising results for tasks like spatiotemporal volume visualization and super-resolution. Inspired by these successes, we develop compressed neural representations for multivariate datasets containing tens to hundreds of variables. Our approach utilizes a single network to learn representations for all data variables simultaneously through parameter sharing. This allows us to achieve state-of-the-art data compression. Through comprehensive evaluations, we demonstrate superior performance in terms of reconstructed data quality, rendering and visualization quality, preservation of dependency information among variables, and storage efficiency.
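The parameter-sharing idea described above can be sketched as a single coordinate network that serves every variable, with only a small per-variable embedding distinguishing them. The sketch below is a minimal, untrained illustration (numpy only); all names, layer sizes, and the embedding scheme are assumptions, not the paper's actual architecture.

```python
# Minimal sketch of a shared implicit neural representation for a
# multivariate dataset: one MLP's weights are reused for all variables,
# and a small per-variable embedding selects which variable to decode.
# Sizes and names are illustrative assumptions, not the paper's design.
import numpy as np

rng = np.random.default_rng(0)

NUM_VARS = 4        # e.g. pressure, temperature, density, velocity magnitude
COORD_DIM = 3       # spatial input (x, y, z)
EMBED_DIM = 8       # per-variable embedding size (assumed)
HIDDEN = 32         # hidden width of the shared MLP (assumed)

# The only variable-specific parameters: one embedding vector per variable.
var_embed = rng.normal(size=(NUM_VARS, EMBED_DIM))

# Shared MLP weights: reused for every variable (parameter sharing).
W1 = rng.normal(size=(COORD_DIM + EMBED_DIM, HIDDEN))
W2 = rng.normal(size=(HIDDEN, 1))

def predict(coords, var_id):
    """Decode one variable's values at the given spatial coordinates
    using the single shared network."""
    n = coords.shape[0]
    emb = np.repeat(var_embed[var_id][None, :], n, axis=0)
    h = np.tanh(np.concatenate([coords, emb], axis=1) @ W1)
    return (h @ W2).ravel()

coords = rng.uniform(size=(5, COORD_DIM))
out0 = predict(coords, 0)   # variable 0 at 5 sample points
out1 = predict(coords, 1)   # variable 1, same shared MLP weights
```

Because the MLP weights are shared, the model's size grows only by one small embedding per added variable, which is the source of the storage efficiency the abstract claims; a real implementation would train these weights to fit the data.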
Similar Papers
Compressive Meta-Learning
Machine Learning (CS)
Learns from data without seeing all of it.
Accelerated Volumetric Compression without Hierarchies: A Fourier Feature Based Implicit Neural Representation Approach
CV and Pattern Recognition
Makes 3D images smaller and faster to use.
Scientific Data Compression and Super-Resolution Sampling
Machine Learning (CS)
Saves space, recovers lost data, keeps science accurate.