A Theory of Universal Rate-Distortion-Classification Representations for Lossy Compression

Published: April 14, 2025 | arXiv ID: 2504.09932v1

By: Nam Nguyen, Thinh Nguyen, Bella Bose

Potential Business Impact:

Lets one AI learn many tasks at once.

Business Areas:
DRM Content and Publishing, Media and Entertainment, Privacy and Security

In lossy compression, Blau and Michaeli [5] introduced the information rate-distortion-perception (RDP) function, extending traditional rate-distortion theory by incorporating perceptual quality. More recently, this framework was expanded by defining the rate-distortion-perception-classification (RDPC) function, integrating multi-task learning that jointly optimizes generative tasks such as perceptual quality and classification accuracy alongside reconstruction tasks [28]. Motivated by the concept of a universal RDP encoder introduced in [34], we investigate universal representations that enable diverse distortion-classification tradeoffs through a single fixed encoder combined with multiple decoders. Specifically, theoretical analysis and numerical experiments demonstrate that for the Gaussian source under mean squared error (MSE) distortion, the entire distortion-classification tradeoff region can be achieved using one universal encoder. In addition, this paper characterizes achievable distortion-classification regions for fixed universal representations under general source distributions, identifying conditions that ensure a minimal distortion penalty when reusing encoders across varying tradeoff points. Experimental results on the MNIST and SVHN datasets validate our theoretical insights, showing that universal encoders can achieve distortion performance comparable to task-specific encoders, thus supporting the practicality and effectiveness of the proposed universal representations.
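The universal-encoder idea for the Gaussian case can be illustrated with a toy sketch: a scalar Gaussian source passed through one fixed Gaussian test-channel "encoder," whose single latent then feeds two different decoders, an MMSE reconstruction decoder and a sign classifier. This is only an assumed illustration of the general principle (one representation, multiple task decoders), not the authors' construction; the rate, variance, and sign-classification task are choices made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0                          # source variance (assumption: unit-variance scalar Gaussian)
n = 100_000
x = rng.normal(0.0, np.sqrt(sigma2), n)

# Fixed "universal" encoder: Gaussian test channel z = x + w at a chosen rate.
rate = 1.0                            # bits per sample (illustrative choice)
d_theory = sigma2 * 2 ** (-2 * rate)  # classical Gaussian R-D function: D(R) = sigma^2 * 2^(-2R)
# Noise variance N solving sigma^2 * N / (sigma^2 + N) = D(R):
noise_var = d_theory * sigma2 / (sigma2 - d_theory)
z = x + rng.normal(0.0, np.sqrt(noise_var), n)

# Decoder 1 (reconstruction task): MMSE estimate of x from the shared latent z.
x_hat = (sigma2 / (sigma2 + noise_var)) * z
mse = np.mean((x - x_hat) ** 2)

# Decoder 2 (classification task): predict sign(x) from the very same latent z.
acc = np.mean(np.sign(z) == np.sign(x))

print(f"theoretical D(R): {d_theory:.3f}, empirical MSE: {mse:.3f}, sign accuracy: {acc:.3f}")
```

At rate R = 1 the empirical MSE lands near the theoretical D(R) = 0.25 while the same latent still supports accurate sign classification, mirroring the paper's claim that one fixed representation can serve several distortion-classification operating points.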

Country of Origin
🇺🇸 United States

Page Count
36 pages

Category
Computer Science:
Information Theory