Challenges and Solutions in Selecting Optimal Lossless Data Compression Algorithms
By: Md. Atiqur Rahman, MM Fazle Rabbi
Potential Business Impact:
Finds the best way to shrink files without losing info.
The rapid growth of digital data has heightened the demand for efficient lossless compression methods. However, existing algorithms exhibit trade-offs: some achieve high compression ratios, others excel in encoding or decoding speed, and none consistently perform best across all dimensions. This mismatch complicates algorithm selection for applications where multiple performance metrics are simultaneously critical, such as medical imaging, which requires both compact storage and fast retrieval. To address this challenge, we present a mathematical framework that integrates compression ratio, encoding time, and decoding time into a unified performance score. The model normalizes and balances these metrics through a principled weighting scheme, enabling objective and fair comparisons among diverse algorithms. Extensive experiments on image and text datasets validate the approach, showing that it reliably identifies the most suitable compressor for different priority settings. Results also reveal that while modern learning-based codecs often provide superior compression ratios, classical algorithms remain advantageous when speed is paramount. The proposed framework offers a robust and adaptable decision-support tool for selecting optimal lossless data compression techniques, bridging theoretical measures with practical application needs.
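As a rough illustration of the kind of unified score the abstract describes, the sketch below combines compression ratio, encoding time, and decoding time into a single weighted value. The min-max normalization, the weights, the candidate codec names, and the example numbers are assumptions chosen for illustration; the paper's actual formulation may differ.

```python
# Hypothetical sketch of a unified performance score for lossless compressors.
# The normalization scheme, weights, and sample values are illustrative assumptions,
# not the paper's exact model.

from dataclasses import dataclass

@dataclass
class CompressorResult:
    name: str
    compression_ratio: float  # original size / compressed size (higher is better)
    encode_time: float        # seconds (lower is better)
    decode_time: float        # seconds (lower is better)

def minmax(values):
    """Scale a list of values into [0, 1]; constant lists map to 0."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def unified_scores(results, w_ratio=0.5, w_encode=0.25, w_decode=0.25):
    """Return a score per compressor (higher is better). Times are inverted
    after normalization so that faster codecs contribute larger terms."""
    ratios = minmax([r.compression_ratio for r in results])
    enc = minmax([r.encode_time for r in results])
    dec = minmax([r.decode_time for r in results])
    return {
        r.name: w_ratio * ratios[i] + w_encode * (1 - enc[i]) + w_decode * (1 - dec[i])
        for i, r in enumerate(results)
    }

if __name__ == "__main__":
    # Made-up measurements for three candidate codecs.
    candidates = [
        CompressorResult("gzip", compression_ratio=2.7, encode_time=0.8, decode_time=0.10),
        CompressorResult("zstd", compression_ratio=2.9, encode_time=0.4, decode_time=0.05),
        CompressorResult("learned-codec", compression_ratio=3.8, encode_time=6.0, decode_time=4.5),
    ]
    # Weight compression ratio more heavily than speed for this priority setting.
    scores = unified_scores(candidates, w_ratio=0.6, w_encode=0.2, w_decode=0.2)
    best = max(scores, key=scores.get)
    print(best, scores)
```

Shifting the weights toward encoding or decoding time changes which codec wins, which mirrors the abstract's observation that learning-based codecs tend to lead on compression ratio while classical algorithms win when speed is the priority.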
Similar Papers
Lossless Compression of Time Series Data: A Comparative Study
Information Theory
Makes storing and sending data much smaller.
Lossless Compression: A New Benchmark for Time Series Model Evaluation
Machine Learning (CS)
Tests computer models by how well they shrink data.
Lossy Compression of Scientific Data: Applications Constraints and Requirements
Instrumentation and Methods for Astrophysics
Shrinks huge science data without losing discoveries.