
Universal Rate-Distortion-Classification Representations for Lossy Compression

Published: April 12, 2025 | arXiv ID: 2504.09025v2

By: Nam Nguyen, Thuan Nguyen, Thinh Nguyen, and more

Potential Business Impact:

Lets a single trained encoder serve many compression and recognition tasks at once.

Business Areas:
Semantic Web, Internet Services

In lossy compression, Wang et al. [1] recently introduced the rate-distortion-perception-classification function, which supports multi-task learning by jointly optimizing perceptual quality, classification accuracy, and reconstruction fidelity. Building on the concept of a universal encoder introduced in [2], we investigate universal representations that enable a broad range of distortion-classification tradeoffs through a single shared encoder coupled with multiple task-specific decoders. We establish, through both theoretical analysis and numerical experiments, that for a Gaussian source under mean squared error (MSE) distortion, the entire distortion-classification tradeoff region can be achieved with a single universal encoder. For general sources, we characterize the achievable region and identify conditions under which reusing the encoder incurs only a negligible distortion penalty. Experimental results on the MNIST dataset further support our theoretical findings, showing that universal encoders achieve distortion performance comparable to that of task-specific encoders. These results demonstrate the practicality and effectiveness of the proposed universal framework in multi-task compression scenarios.
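To make the shared-encoder idea concrete, below is a minimal sketch (not the authors' code) of a universal representation: one encoder feeds a reconstruction decoder and a classification decoder, and a weighted loss trades off MSE distortion against classification error. All module names, layer sizes, and the tradeoff weight `lam` are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class UniversalEncoder(nn.Module):
    """Shared encoder producing one representation z reused by all tasks."""
    def __init__(self, in_dim=784, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, z_dim))
    def forward(self, x):
        return self.net(x)

class ReconDecoder(nn.Module):
    """Task-specific decoder for reconstruction (distortion measured by MSE)."""
    def __init__(self, z_dim=32, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))
    def forward(self, z):
        return self.net(z)

class ClassDecoder(nn.Module):
    """Task-specific decoder for classification (e.g., 10 MNIST classes)."""
    def __init__(self, z_dim=32, n_classes=10):
        super().__init__()
        self.net = nn.Linear(z_dim, n_classes)
    def forward(self, z):
        return self.net(z)

def multitask_loss(x, y, enc, dec_r, dec_c, lam=0.5):
    """Weighted sum of MSE distortion and cross-entropy classification loss.
    Sweeping `lam` traces out points on a distortion-classification tradeoff."""
    z = enc(x)
    mse = nn.functional.mse_loss(dec_r(z), x)
    ce = nn.functional.cross_entropy(dec_c(z), y)
    return (1 - lam) * mse + lam * ce

if __name__ == "__main__":
    enc, dec_r, dec_c = UniversalEncoder(), ReconDecoder(), ClassDecoder()
    x = torch.randn(8, 784)              # stand-in for flattened MNIST images
    y = torch.randint(0, 10, (8,))       # stand-in class labels
    loss = multitask_loss(x, y, enc, dec_r, dec_c, lam=0.5)
    loss.backward()                      # gradients flow into the shared encoder
    print(f"combined loss: {loss.item():.4f}")
```

In this sketch the encoder is trained once and shared, while each decoder remains task-specific, mirroring the paper's claim that a single universal encoder can cover a range of distortion-classification operating points.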

Country of Origin
🇺🇸 United States

Page Count
11 pages

Category
Computer Science:
Information Theory