Universal Representations for Classification-enhanced Lossy Compression
By: Nam Nguyen
Potential Business Impact:
Makes one computer program work for many tasks.
In lossy compression, the classical tradeoff between compression rate and reconstruction distortion has long guided algorithm design. However, Blau and Michaeli [5] introduced a generalized framework, the rate-distortion-perception (RDP) function, which incorporates perceptual quality as an additional dimension of evaluation. More recently, the rate-distortion-classification (RDC) function was investigated in [19], which evaluates compression performance by considering classification accuracy alongside distortion. In this paper, we explore universal representations, in which a single encoder is trained to serve multiple decoding objectives across various distortion and classification (or perception) constraints. This universality avoids retraining encoders for each specific operating point within these tradeoffs. Our experimental validation on the MNIST dataset indicates that a universal encoder incurs only minimal performance degradation compared to individually optimized encoders for perceptual image compression tasks, aligning with prior results from [23]. Nonetheless, we also find that in the RDC setting, reusing an encoder optimized for one specific classification-distortion tradeoff incurs a significant distortion penalty when applied at alternative operating points.
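The core tension the abstract describes can be illustrated with a toy sketch (not the paper's MNIST setup): a fixed quantizer plays the role of a shared encoder, and two decoders target different points on the distortion-classification tradeoff. The Gaussian-mixture source, the 4-level quantizer, and both decoder rules below are illustrative assumptions, not constructions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy source: 1-D Gaussian mixture; the mixture component serves as the
# class label (a stand-in for the classification task).
n = 10_000
labels = rng.integers(0, 2, n)
x = rng.normal(loc=np.where(labels == 0, -1.0, 1.0), scale=0.5)

# Shared "encoder": a fixed 4-level uniform quantizer over [-2, 2].
edges = np.linspace(-2.0, 2.0, 5)
idx = np.clip(np.digitize(x, edges) - 1, 0, 3)

# Distortion-optimal decoder: reconstruct each cell by its centroid,
# which minimizes MSE for this fixed encoder.
centroids = np.array([x[idx == k].mean() for k in range(4)])
x_hat_d = centroids[idx]

# Classification-oriented decoder: reconstruct each cell by the mean of
# its majority class, sharpening class structure at a distortion cost.
class_means = np.array([x[labels == c].mean() for c in (0, 1)])
majority = np.array([np.bincount(labels[idx == k], minlength=2).argmax()
                     for k in range(4)])
x_hat_c = class_means[majority][idx]

mse_d = np.mean((x - x_hat_d) ** 2)
mse_c = np.mean((x - x_hat_c) ** 2)
print(f"distortion-optimal decoder MSE:      {mse_d:.3f}")
print(f"classification-oriented decoder MSE: {mse_c:.3f}")
```

Because the centroid decoder is MSE-optimal for the fixed encoder, any decoder tuned toward the classification objective necessarily pays a distortion penalty, mirroring (in a much simpler setting) the RDC tradeoff the paper measures.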
Similar Papers
A Theory of Universal Rate-Distortion-Classification Representations for Lossy Compression
Information Theory
Lets one AI learn many tasks at once.
Universal Rate-Distortion-Classification Representations for Lossy Compression
Information Theory
Makes one computer brain learn many tasks.
Optimal Neural Compressors for the Rate-Distortion-Perception Tradeoff
Information Theory
Makes pictures smaller with less lost detail.