Generalization Bounds for Quantum Learning via Rényi Divergences
By: Naqueeb Ahmad Warsi, Ayanava Dasgupta, Masahito Hayashi
Potential Business Impact:
Helps quantum computers learn better from less data.
This work advances the theoretical understanding of quantum learning by establishing a new family of upper bounds on the expected generalization error of quantum learning algorithms, leveraging the framework introduced by Caro et al. (2024) and a new definition of the expected true loss. Our primary contribution is the derivation of these bounds in terms of quantum and classical Rényi divergences, using a variational approach for evaluating quantum Rényi divergences, specifically the Petz divergence and a newly introduced modified sandwiched quantum Rényi divergence. Analytically and numerically, we demonstrate the superior performance of the bounds derived using the modified sandwiched quantum Rényi divergence compared to those based on the Petz divergence. Furthermore, we provide probabilistic generalization error bounds using two distinct techniques: one based on the modified sandwiched quantum Rényi divergence and the classical Rényi divergence, and another employing the smooth max-Rényi divergence.
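To make the comparison concrete, here is a minimal numerical sketch of the two standard quantum Rényi divergences the abstract contrasts. Note the hedge: the paper's *modified* sandwiched divergence is its own construction and is not reproduced here; the code below implements only the textbook Petz and sandwiched definitions on small example density matrices, which already illustrates the known ordering (sandwiched ≤ Petz).

```python
# Sketch: numerically comparing the Petz and sandwiched quantum Rényi divergences.
# ASSUMPTION: this implements only the standard definitions, not the paper's
# "modified sandwiched" divergence, whose exact form is introduced in the paper.
import numpy as np

def mpow(A, p):
    """Matrix power of a Hermitian PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)  # clip tiny negative eigenvalues from rounding
    return (v * w**p) @ v.conj().T

def petz_renyi(rho, sigma, alpha):
    """Petz Rényi divergence: (1/(alpha-1)) * log Tr[rho^alpha sigma^(1-alpha)]."""
    val = np.real(np.trace(mpow(rho, alpha) @ mpow(sigma, 1.0 - alpha)))
    return np.log(val) / (alpha - 1.0)

def sandwiched_renyi(rho, sigma, alpha):
    """Sandwiched Rényi divergence:
    (1/(alpha-1)) * log Tr[(sigma^((1-alpha)/(2a)) rho sigma^((1-alpha)/(2a)))^alpha]."""
    s = mpow(sigma, (1.0 - alpha) / (2.0 * alpha))
    val = np.real(np.trace(mpow(s @ rho @ s, alpha)))
    return np.log(val) / (alpha - 1.0)

# Two non-commuting, full-rank qubit density matrices (illustrative choice).
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
sigma = np.array([[0.5, -0.1], [-0.1, 0.5]])
alpha = 2.0

d_petz = petz_renyi(rho, sigma, alpha)
d_sand = sandwiched_renyi(rho, sigma, alpha)
```

For any pair of states and alpha > 1, the sandwiched divergence lower-bounds the Petz divergence (a consequence of the Araki-Lieb-Thirring inequality), which is why tighter generalization bounds can come from the sandwiched family, as the abstract reports for its modified variant.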
Similar Papers
Generalization Bounds in Hybrid Quantum-Classical Machine Learning Models
Quantum Physics
Helps computers learn better from data.
On the Generalization of Adversarially Trained Quantum Classifiers
Quantum Physics
Makes quantum computers safer from tricky attacks.
Estimating quantum relative entropies on quantum computers
Quantum Physics
Compares secret quantum information faster.