Limits of quantum generative models with classical sampling hardness
By: Sabrina Herbst, Ivona Brandić, Adrián Pérez-Salinas
Sampling tasks have been successful in establishing quantum advantages, both in theory and in experiments. This has fueled the use of quantum computers for generative modeling, i.e., creating samples that follow the probability distribution underlying a given dataset. In particular, the ability to build generative models on classically hard distributions would immediately preclude classical simulability, owing to established theoretical separations. In this work, we study quantum generative models from the perspective of their output distributions, showing that models whose outputs anticoncentrate are not trainable on average, including those exhibiting quantum advantage. In contrast, models outputting data from sparse distributions can be trained. We consider special cases that enhance trainability, and observe that this opens the path to classical surrogate-sampling algorithms. This observed trade-off is linked to the verification of quantum processes. We conclude that quantum advantage can still be found in generative models, although its source must be distinct from anticoncentration.
Similar Papers
Generative quantum advantage for classical and quantum problems
Quantum Physics
Quantum computers learn and create things impossible for regular computers.
Quantum latent distributions in deep generative models
Machine Learning (CS)
Quantum computers help AI make better pictures.
Prospects for quantum advantage in machine learning from the representability of functions
Quantum Physics
Finds when quantum computers can do things regular computers can't.