Universal Rate-Distortion-Classification Representations for Lossy Compression
By: Nam Nguyen, Thuan Nguyen, Thinh Nguyen, and more
Potential Business Impact:
Makes one computer brain learn many tasks.
In lossy compression, Wang et al. [1] recently introduced the rate-distortion-perception-classification function, which supports multi-task learning by jointly optimizing perceptual quality, classification accuracy, and reconstruction fidelity. Building on the concept of a universal encoder introduced in [2], we investigate universal representations that enable a broad range of distortion-classification tradeoffs through a single shared encoder coupled with multiple task-specific decoders. We establish, through both theoretical analysis and numerical experiments, that for a Gaussian source under mean squared error (MSE) distortion, the entire distortion-classification tradeoff region can be achieved with a single universal encoder. For general sources, we characterize the achievable region and identify conditions under which encoder reuse incurs a negligible distortion penalty. Experiments on the MNIST dataset further support our theoretical findings, showing that universal encoders attain distortion performance comparable to task-specific encoders. These results demonstrate the practicality and effectiveness of the proposed universal framework in multi-task compression scenarios.
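As background for the Gaussian result in the abstract, the classical rate-distortion function for a Gaussian source under MSE gives the best achievable distortion at a given rate; the paper's distortion-classification region generalizes this single-task baseline. The sketch below computes only that classical baseline (the function name and interface are ours, not from the paper):

```python
import math

def gaussian_rd_distortion(rate_bits: float, variance: float = 1.0) -> float:
    """Classical Gaussian rate-distortion under MSE: D(R) = sigma^2 * 2^(-2R).

    This is the single-task (distortion-only) baseline; the paper studies the
    larger tradeoff region that also accounts for classification accuracy.
    """
    if rate_bits < 0:
        raise ValueError("rate must be nonnegative")
    return variance * 2.0 ** (-2.0 * rate_bits)

# Doubling the rate quarters the achievable MSE for a unit-variance source.
print(gaussian_rd_distortion(1.0))  # 0.25
print(gaussian_rd_distortion(2.0))  # 0.0625
```

At zero rate the best reconstruction is the mean, so the distortion equals the source variance; each additional bit per sample shrinks the achievable MSE by a factor of four.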
Similar Papers
A Theory of Universal Rate-Distortion-Classification Representations for Lossy Compression
Information Theory
Lets one AI learn many tasks at once.
Universal Representations for Classification-enhanced Lossy Compression
CV and Pattern Recognition
Makes one computer program work for many tasks.
Rate-Distortion-Perception Theory for the Quadratic Wasserstein Space
Information Theory
Makes pictures clearer with less data.