Does the Data Processing Inequality Reflect Practice? On the Utility of Low-Level Tasks
By: Roy Turgeman, Tom Tirer
Potential Business Impact:
Makes computers learn better with messy data.
The data processing inequality is an information-theoretic principle stating that the information content of a signal cannot be increased by processing the observations. In particular, it suggests that there is no benefit in enhancing the signal or encoding it before addressing a classification problem. This assertion can indeed be proven for the optimal Bayes classifier. In practice, however, it is common to perform "low-level" tasks before "high-level" downstream tasks, despite the overwhelming capabilities of modern deep neural networks. In this paper, we aim to understand when and why low-level processing can be beneficial for classification. We present a comprehensive theoretical study of a binary classification setup, in which we consider a classifier that is tightly connected to the optimal Bayes classifier and converges to it as the number of training samples increases. We prove that for any finite number of training samples, there exists a pre-classification processing step that improves the classification accuracy. We also explore how class separation, training set size, and class balance affect the relative gain from this procedure, and we support the theory with an empirical investigation of the same setup. Finally, we conduct an empirical study of the effect of denoising and encoding on the performance of practical deep classifiers on benchmark datasets. Specifically, we vary the training set's size and class distribution, as well as the noise level, and demonstrate trends that are consistent with our theoretical results.
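To make the finite-sample intuition concrete, below is a minimal, self-contained sketch (not the authors' code or their exact setup): two Gaussian classes whose means differ along a single known direction in a 50-dimensional space, a plug-in nearest-class-mean classifier fit on n training samples, and a simple "low-level" step that projects the observations onto that signal direction before classification. All parameter values (d, mu, noise_std, the list of training sizes) are illustrative assumptions, and the projection is only a stand-in for the denoising/encoding operations studied in the paper. With few training samples the pre-processed pipeline typically wins, because the projection removes dimensions in which the estimated class means are pure noise; the gap closes as n grows, mirroring the trend described in the abstract.

import numpy as np

rng = np.random.default_rng(0)
d = 50                       # ambient dimension (illustrative)
u = np.zeros(d); u[0] = 1.0  # signal direction, assumed known to the denoiser
mu = 1.0                     # class means sit at +/- mu * u
noise_std = 1.0              # isotropic observation noise

def sample(n):
    # Balanced binary classes observed in noise: x = +/- mu*u + noise
    y = np.arange(n) % 2
    x = np.outer(np.where(y == 1, mu, -mu), u) + noise_std * rng.standard_normal((n, d))
    return x, y

def nearest_mean_accuracy(n_train, denoise, n_test=5000):
    # Fit class means on n_train samples; classify test points by the nearer mean
    xtr, ytr = sample(n_train)
    xte, yte = sample(n_test)
    if denoise:  # "low-level" step: project onto the known signal subspace
        xtr = np.outer(xtr @ u, u)
        xte = np.outer(xte @ u, u)
    m0, m1 = xtr[ytr == 0].mean(axis=0), xtr[ytr == 1].mean(axis=0)
    pred = (((xte - m1) ** 2).sum(axis=1) < ((xte - m0) ** 2).sum(axis=1)).astype(int)
    return (pred == yte).mean()

for n in (4, 16, 64, 256, 1024):
    raw = np.mean([nearest_mean_accuracy(n, denoise=False) for _ in range(20)])
    den = np.mean([nearest_mean_accuracy(n, denoise=True) for _ in range(20)])
    print(f"n_train={n:5d}   raw acc={raw:.3f}   denoised acc={den:.3f}")

Note that this toy experiment does not contradict the data processing inequality: the projection cannot add information, but it reduces the number of quantities the finite-sample classifier must estimate, which is exactly the kind of effect the paper analyzes.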
Similar Papers
Noise Quantification and Control in Circuits via Strong Data-Processing Inequalities
Information Theory
Makes computers work even with errors.
Optimal Fairness under Local Differential Privacy
Machine Learning (CS)
Makes private data fair for computers.
A DPI-PAC-Bayesian Framework for Generalization Bounds
Information Theory
Makes computer learning more accurate and reliable.