Modified K-means Algorithm with Local Optimality Guarantees
By: Mingyi Li, Michael R. Metel, Akiko Takeda
Potential Business Impact:
Makes computer-generated groupings of data more accurate and reliable.
The K-means algorithm is one of the most widely studied clustering algorithms in machine learning. While extensive research has focused on its ability to achieve a globally optimal solution, a rigorous analysis of its local optimality guarantees is still lacking. In this paper, we first present conditions under which the K-means algorithm converges to a locally optimal solution. Based on this, we propose simple modifications to the K-means algorithm which ensure local optimality in both the continuous and discrete sense, with the same computational complexity as the original K-means algorithm. As the dissimilarity measure, we consider a general Bregman divergence, which generalizes the squared Euclidean distance commonly used in the K-means algorithm. Numerical experiments confirm that the K-means algorithm does not always find a locally optimal solution in practice, while our proposed methods provide improved locally optimal solutions with reduced clustering loss. Our code is available at https://github.com/lmingyi/LO-K-means.
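For context, the baseline the paper modifies is the standard (Lloyd's) K-means iteration: alternate between assigning each point to its nearest center and recomputing each center as the mean of its assigned points. The sketch below is a minimal illustrative implementation with the squared Euclidean distance (one special case of a Bregman divergence); it is not the authors' LO-K-means method, and the function name and parameters are our own.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd's K-means sketch with squared Euclidean distance.

    Illustrative only: the paper's point is that this fixed-point
    iteration can stop at a solution that is not locally optimal.
    """
    rng = np.random.default_rng(seed)
    # Initialize centers as k distinct data points.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest center.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of its cluster
        # (an empty cluster keeps its previous center).
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break  # Fixed point reached; not necessarily locally optimal.
        centers = new_centers
    loss = ((X - centers[labels]) ** 2).sum()
    return labels, centers, loss
```

Replacing the squared Euclidean distance with another Bregman divergence (e.g., the KL divergence) changes only the assignment distances; the mean remains the loss-minimizing center for any Bregman divergence, which is why the paper can treat this general class.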
Similar Papers
LocalKMeans: Convergence of Lloyd's Algorithm with Distributed Local Iterations
Machine Learning (Stat)
Makes computer learning faster on many machines.
K*-Means: A Parameter-free Clustering Algorithm
Machine Learning (CS)
Finds hidden groups in data without guessing.
An Observation on Lloyd's k-Means Algorithm in High Dimensions
Machine Learning (Stat)
Fixes computer grouping when data is messy.