The Performance of Compression-Based Denoisers
By: Dan Song, Ayfer Özgür, Tsachy Weissman
We consider a denoiser that reconstructs a stationary ergodic source by lossily compressing samples of the source observed through a memoryless noisy channel. Prior work on compression-based denoising has been limited to additive noise channels. We extend this framework to general discrete memoryless channels by deliberately choosing the distortion measure for the lossy compressor to match the channel conditional distribution. By bounding the deviation of the empirical joint distribution of the source, observation, and denoiser output from satisfying a Markov property, we give an exact characterization of the loss achieved by such a denoiser. Consequences of these results are demonstrated explicitly in special cases, including MSE and Hamming loss. We also compare this approach to an indirect rate-distortion perspective on the problem.
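To make the matched-distortion idea concrete, one common instantiation (a sketch under assumed notation, not necessarily the paper's exact definitions) takes the single-letter distortion between a reconstruction symbol $\hat{x}$ and an observation symbol $y$ to be the log-loss under the channel transition law $\Gamma(y \mid x)$:

\[
\rho(\hat{x}, y) \;=\; \log \frac{1}{\Gamma(y \mid \hat{x})},
\qquad
\rho_n(\hat{x}^n, y^n) \;=\; \frac{1}{n} \sum_{i=1}^{n} \log \frac{1}{\Gamma(y_i \mid \hat{x}_i)} .
\]

Under this assumed choice, a small per-symbol distortion between the compressor's output $\hat{x}^n$ and the noisy observation $y^n$ means the reconstruction is channel-consistent with what was observed, which is what allows the empirical joint distribution of source, observation, and denoiser output to be compared against the Markov property referenced in the abstract.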
Similar Papers
Compression with Privacy-Preserving Random Access
Information Theory
Keeps secrets safe while shrinking files.
RDD: Pareto Analysis of the Rate-Distortion-Distinguishability Trade-off
Signal Processing
Finds hidden problems in data, even when compressed.
Information-Theoretic Equivalences Across Rate-Distortion, Quantization, and Decoding
Information Theory
Makes data compression and error correction work together.