A High-Throughput GPU Framework for Adaptive Lossless Compression of Floating-Point Data
By: Zheng Li, Weiyan Wang, Ruiyuan Li, and more
Potential Business Impact:
Shrinks big computer data without losing any details.
The torrential influx of floating-point data from domains like IoT and HPC necessitates high-performance lossless compression to mitigate storage costs while preserving absolute data fidelity. Leveraging GPU parallelism for this task presents significant challenges, including bottlenecks in heterogeneous data movement, complexities in executing precision-preserving conversions, and performance degradation due to anomaly-induced sparsity. To address these challenges, this paper introduces a novel GPU-based framework for floating-point adaptive lossless compression. The proposed solution employs three key innovations: a lightweight asynchronous pipeline that effectively hides I/O latency during CPU-GPU data transfer; a fast and theoretically guaranteed float-to-integer conversion method that eliminates errors inherent in floating-point arithmetic; and an adaptive sparse bit-plane encoding strategy that mitigates the sparsity caused by outliers. Extensive experiments on 12 diverse datasets demonstrate that the proposed framework significantly outperforms state-of-the-art competitors, achieving an average compression ratio of 0.299 (a 9.1% relative improvement over the best competitor), an average compression throughput of 10.82 GB/s (2.4x higher), and an average decompression throughput of 12.32 GB/s (2.4x higher).
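The abstract's second innovation is a float-to-integer conversion that avoids the rounding errors of ordinary floating-point arithmetic. The paper does not spell out its method here, but a standard error-free building block for such schemes is to reinterpret the IEEE-754 bit pattern of each value as an integer, using only integer operations, then remap sign-magnitude bits so integer order matches numeric order. The sketch below (function names are illustrative, not from the paper) shows this bit-exact, fully reversible mapping for 64-bit doubles:

```python
import struct

MASK64 = 0xFFFFFFFFFFFFFFFF
SIGN_BIT = 1 << 63

def float_to_ordered_uint(x: float) -> int:
    # Reinterpret the IEEE-754 double as a raw 64-bit pattern (no rounding).
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    if bits & SIGN_BIT:
        # Negative values: flip all bits so larger floats map to larger ints.
        return (~bits) & MASK64
    # Non-negative values: set the sign bit to place them above negatives.
    return bits | SIGN_BIT

def ordered_uint_to_float(u: int) -> float:
    if u & SIGN_BIT:
        bits = u ^ SIGN_BIT          # undo the sign-bit flip
    else:
        bits = (~u) & MASK64         # undo the full bit flip
    return struct.unpack("<d", struct.pack("<Q", bits))[0]
```

Because the round trip is pure bit manipulation, it is lossless by construction, and the order-preserving property makes downstream integer techniques such as delta or bit-plane coding effective. This is a common ingredient in floating-point compressors generally; the paper's actual conversion and its theoretical guarantees may differ.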
Similar Papers
GPU-Based Floating-point Adaptive Lossless Compression
Databases
Makes computer data smaller, faster, and perfect.
GPZ: GPU-Accelerated Lossy Compressor for Particle Data
Distributed, Parallel, and Cluster Computing
Makes huge science data smaller and faster.
70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float
Machine Learning (CS)
Makes big AI models smaller, faster, and run anywhere.