Quantum Boltzmann Machines for Sample-Efficient Reinforcement Learning
By: Thore Gerlach, Michael Schenk, Verena Kain
Potential Business Impact:
Makes computers learn faster with less effort.
We introduce theoretically grounded Continuous Semi-Quantum Boltzmann Machines (CSQBMs) that support continuous-action reinforcement learning. By combining exponential-family priors over visible units with quantum Boltzmann distributions over hidden units, CSQBMs yield a hybrid quantum-classical model that reduces qubit requirements while retaining strong expressiveness. Crucially, gradients with respect to continuous variables can be computed analytically, enabling direct integration into Actor-Critic algorithms. Building on this, we propose a continuous Q-learning framework that replaces global maximization over actions with efficient sampling from the CSQBM distribution, thereby overcoming instability issues in continuous control.
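To illustrate the core idea of replacing the global argmax in continuous Q-learning with Boltzmann-style sampling, here is a minimal classical sketch. It uses a toy quadratic critic and a candidate-resampling step in place of the paper's CSQBM sampler; the function names (`q_value`, `sample_action`) and all parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Classical stand-in for Boltzmann-style action selection: instead of a global
# argmax over actions, draw candidate actions and sample one with probability
# proportional to exp(Q(s, a) / T). The quadratic critic below is a placeholder
# for the CSQBM-based Q-function described in the paper.

rng = np.random.default_rng(0)

def q_value(state, action, W):
    # Toy quadratic critic: Q(s, a) = -||a - W s||^2 (illustrative only).
    target = W @ state
    return -np.sum((action - target) ** 2)

def sample_action(state, W, n_candidates=256, temperature=0.1, action_dim=2):
    # Draw candidate actions, then resample one from Boltzmann weights
    # exp(Q / T); this replaces the unstable global maximization step.
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, action_dim))
    q = np.array([q_value(state, a, W) for a in candidates])
    logits = (q - q.max()) / temperature   # shift for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return candidates[rng.choice(n_candidates, p=probs)]

if __name__ == "__main__":
    W = rng.normal(size=(2, 3)) * 0.3       # toy critic parameters
    state = rng.normal(size=3)
    action = sample_action(state, W)
    print("sampled action:", action, "Q:", q_value(state, action, W))
```

In the paper's setting, the candidate-resampling step would be performed by sampling directly from the CSQBM distribution rather than from a uniform proposal.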
Similar Papers
Quantum-Boosted High-Fidelity Deep Learning
Machine Learning (CS)
Helps computers understand complex science data better.
Unlocking the Power of Boltzmann Machines by Parallelizable Sampler and Efficient Temperature Estimation
Machine Learning (CS)
Makes smart computers learn faster and better.
Quantum Boltzmann Machines using Parallel Annealing for Medical Image Classification
Quantum Physics
Trains smart computer programs much faster.