Provably faster randomized and quantum algorithms for $k$-means clustering via uniform sampling
By: Tyler Chen, Archan Ray, Akshay Seshadri, and more
Potential Business Impact:
Speeds up sorting big data into groups.
The $k$-means algorithm (Lloyd's algorithm) is a widely used method for clustering unlabeled data. A key bottleneck of the $k$-means algorithm is that each iteration requires time linear in the number of data points, which can be expensive in big data applications. This was improved in recent works proposing quantum and quantum-inspired classical algorithms that approximate the $k$-means algorithm locally, in time depending only logarithmically on the number of data points (along with data-dependent parameters) [$q$-means: A quantum algorithm for unsupervised machine learning; Kerenidis, Landman, Luongo, and Prakash, NeurIPS 2019; Do you know what $q$-means?; Doriguello, Luongo, and Tang]. In this work, we describe a simple randomized mini-batch $k$-means algorithm and a quantum algorithm inspired by the classical algorithm. We prove worst-case guarantees that significantly improve upon the bounds for previous algorithms. Our improvements are due to a careful use of uniform sampling, which preserves certain symmetries of the $k$-means problem that are not preserved in previous algorithms based on data norm-dependent sampling.
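To make the idea concrete, here is a minimal sketch of a mini-batch Lloyd update driven by uniform sampling, in the spirit of the classical algorithm the abstract describes. This is an illustrative reconstruction, not the paper's exact algorithm: the function name `minibatch_kmeans` and all parameter choices (batch size, iteration count, the running-count step-size rule) are assumptions for the example. The key point it demonstrates is that each iteration touches only a uniformly sampled batch, so per-iteration cost is independent of the total number of data points.

```python
import numpy as np

def minibatch_kmeans(X, k, batch_size=50, iters=50, seed=0):
    """Illustrative mini-batch k-means with uniform sampling.

    Each iteration draws a batch uniformly at random (no norm-based
    weighting) and nudges each center toward the mean of the batch
    points assigned to it, with a step size that decays as the
    center accumulates assignments.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialize centers from k distinct uniformly sampled points.
    centers = X[rng.choice(n, size=k, replace=False)].astype(float)
    counts = np.zeros(k)  # running assignment counts per center
    for _ in range(iters):
        # Uniform sampling: every point is equally likely, which is
        # what distinguishes this from norm-based sampling schemes.
        batch = X[rng.choice(n, size=batch_size, replace=True)]
        # Assign each batch point to its nearest center.
        dists = ((batch[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update each center toward the mean of its assigned batch points.
        for j in range(k):
            pts = batch[labels == j]
            if len(pts) > 0:
                counts[j] += len(pts)
                eta = len(pts) / counts[j]  # decaying step size
                centers[j] = (1.0 - eta) * centers[j] + eta * pts.mean(axis=0)
    return centers
```

On data with well-separated clusters, the returned centers land near the true cluster means after a modest number of iterations, while each iteration's cost scales with `batch_size` rather than `n`.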
Similar Papers
HARLI CQUINN: Higher Adjusted Randomness with Linear In Complexity QUantum INspired Networks for K-Means
Quantum Physics
Makes computers sort data faster and better.
Sublinear Time Quantum Sensitivity Sampling
Data Structures and Algorithms
Makes computers solve hard math problems faster.
A Quantum Bagging Algorithm with Unsupervised Base Learners for Label Corrupted Datasets
Quantum Physics
Makes computers learn better from messy information.