On the Generalization Error of Differentially Private Algorithms Via Typicality
By: Yanxiao Liu, Chun Hei Michael Shiu, Lele Wang, et al.
We study the generalization error of stochastic learning algorithms from an information-theoretic perspective, with a particular emphasis on deriving sharper bounds for differentially private algorithms. It is well known that the generalization error of stochastic learning algorithms can be bounded in terms of mutual information and maximal leakage, yielding in-expectation and high-probability guarantees, respectively. In this work, we further upper bound mutual information and maximal leakage by explicit, easily computable formulas, using typicality-based arguments and exploiting the stability properties of private algorithms. In the first part of the paper, we strictly improve the mutual-information bounds by Rodríguez-Gálvez et al. (IEEE Trans. Inf. Theory, 2021). In the second part, we derive new upper bounds on the maximal leakage of learning algorithms. In both cases, the resulting bounds on information measures translate directly into generalization error guarantees.
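For context, the in-expectation guarantee mentioned above is the standard mutual-information generalization bound of Xu and Raginsky (2017); the notation below (training sample $S$ of $n$ i.i.d. points, algorithm output $W$, $\sigma$-subgaussian loss) is our own shorthand and is not taken from the paper:
\[
  \bigl|\mathbb{E}\bigl[\mathrm{gen}(S, W)\bigr]\bigr| \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)},
\]
where $I(S; W)$ is the mutual information between the training sample and the algorithm's output, and the loss $\ell(w, Z)$ is assumed $\sigma$-subgaussian under the data distribution for every fixed hypothesis $w$. The paper's contribution, as described in the abstract, is to further upper bound $I(S; W)$ (and, analogously, the maximal leakage used for high-probability guarantees) by explicit, easily computable expressions when the algorithm is differentially private.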