Learning Reduced Representations for Quantum Classifiers
By: Patrick Odagiu, Vasilis Belis, Lennart Schulze, and more
Potential Business Impact:
Helps quantum computers learn from data described by many features.
Data sets described by a large number of features currently lie outside the area of applicability of quantum machine learning algorithms. An immediate solution to this impasse is to apply dimensionality reduction methods before passing the data to the quantum algorithm. We apply six conventional feature extraction algorithms and five autoencoder-based dimensionality reduction models to a particle physics data set with 67 features. The reduced representations generated by these models are then used to train a quantum support vector machine for solving a binary classification problem: whether a Higgs boson is produced in proton collisions at the LHC. We show that the autoencoder methods learn a better lower-dimensional representation of the data, with the method we design, the Sinkclass autoencoder, performing 40% better than the baseline. The methods developed here open up the applicability of quantum machine learning to a larger array of data sets. Moreover, we provide a recipe for effective dimensionality reduction in this context.
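To illustrate the pipeline the abstract describes, below is a minimal sketch of the two-stage approach: an autoencoder compresses the high-dimensional events into a small latent space, and a quantum-kernel support vector machine is trained on the latent features. This is not the paper's Sinkclass autoencoder; the architecture, latent dimension, feature map, and the use of qiskit-machine-learning's FidelityQuantumKernel and QSVC are illustrative assumptions, and the random arrays stand in for the actual LHC data set.

```python
# Sketch: autoencoder-based dimensionality reduction followed by a quantum SVM.
# All choices below (layer sizes, latent dimension, feature map) are assumptions.
import numpy as np
import torch
import torch.nn as nn
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC

N_FEATURES, LATENT_DIM = 67, 4  # 67 input features compressed to 4 latent ones


class Autoencoder(nn.Module):
    """Plain fully connected autoencoder; the encoder output is the reduced representation."""

    def __init__(self, n_features: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, n_features),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


# Stand-in data: random features and labels in place of the proton-collision events.
rng = np.random.default_rng(0)
x = torch.tensor(rng.normal(size=(200, N_FEATURES)), dtype=torch.float32)
y = rng.integers(0, 2, size=200)  # signal (Higgs produced) vs. background

# Train the autoencoder on reconstruction loss only (unsupervised).
model = Autoencoder(N_FEATURES, LATENT_DIM)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(50):
    optimizer.zero_grad()
    reconstruction, _ = model(x)
    loss = loss_fn(reconstruction, x)
    loss.backward()
    optimizer.step()

# Encode the events into the low-dimensional latent space.
with torch.no_grad():
    _, z = model(x)
z = z.numpy()

# Quantum SVM on the latent features, using a fidelity quantum kernel.
# Restricted to a small subset because kernel evaluation on simulators is costly.
feature_map = ZZFeatureMap(feature_dimension=LATENT_DIM, reps=2)
kernel = FidelityQuantumKernel(feature_map=feature_map)
qsvc = QSVC(quantum_kernel=kernel)
qsvc.fit(z[:40], y[:40])
print("Training accuracy:", qsvc.score(z[:40], y[:40]))
```

The key design point carried over from the abstract is that the quantum classifier never sees the 67 raw features; only the learned latent representation is encoded into the quantum feature map, which keeps the circuit width manageable.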
Similar Papers
Influence of Data Dimensionality Reduction Methods on the Effectiveness of Quantum Machine Learning Models
Quantum Physics
Makes quantum computers work better for learning.
Modeling Quantum Autoencoder Trainable Kernel for IoT Anomaly Detection
Machine Learning (CS)
Quantum computers catch hackers faster than normal ones.
Learning Minimal Representations of Many-Body Physics from Snapshots of a Quantum Simulator
Quantum Physics
Teaches computers to find hidden patterns in quantum experiments.