A New Framework for Convex Clustering in Kernel Spaces: Finite Sample Bounds, Consistency and Performance Insights
By: Shubhayan Pan, Saptarshi Chakraborty, Debolina Paul, and more
Potential Business Impact:
Groups messy data into clear patterns.
Convex clustering is a well-regarded clustering method. Like Lloyd's $k$-means, it is centroid-based, but it does not require a predefined cluster count: it starts with each data point as its own centroid and iteratively merges them. Despite these advantages, the method can fail on data with linearly non-separable or non-convex structure. To mitigate these limitations, we propose a kernelized extension of convex clustering. Our approach maps the data points into a Reproducing Kernel Hilbert Space (RKHS) via a feature map and performs convex clustering in that transformed space. This kernelization not only handles complex data distributions better but also yields an embedding in a finite-dimensional vector space. We provide comprehensive theoretical underpinnings for the kernelized approach, proving algorithmic convergence and establishing finite sample bounds for our estimates. Extensive experiments on both synthetic and real-world datasets demonstrate the method's effectiveness, showing superior performance compared to state-of-the-art clustering techniques. This work marks a significant advancement in the field, offering an effective solution for clustering non-linear and non-convex data.
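The pipeline the abstract describes can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes an RBF kernel, obtains a finite-dimensional embedding via standard kernel PCA on the centered Gram matrix, and then minimizes the classical convex clustering objective with plain subgradient descent. All function names (`rbf_kernel`, `kernel_embedding`, `convex_cluster`, `cc_objective`) and parameter choices are illustrative.

```python
# Hypothetical sketch of kernelized convex clustering (NOT the paper's exact
# method): kernel-PCA embedding followed by subgradient descent on the
# convex clustering objective.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_embedding(K, dim=2):
    # Finite-dimensional embedding from the centered Gram matrix
    # (standard kernel PCA), as one concrete realization of the
    # "embedding in a finite-dimensional vector space" idea.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    vals, vecs = np.linalg.eigh(H @ K @ H)
    idx = np.argsort(vals)[::-1][:dim]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

def cc_objective(U, Z, lam):
    # 0.5 * sum_i ||z_i - u_i||^2  +  lam * sum_{i<j} ||u_i - u_j||
    pair = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=2)
    return 0.5 * np.sum((U - Z) ** 2) + lam * np.sum(np.triu(pair, 1))

def convex_cluster(Z, lam=0.1, steps=500, lr=0.05):
    # Subgradient descent: centroids start at the data points and are
    # pulled together by the fusion penalty; merged centroids = one cluster.
    U = Z.copy()
    for _ in range(steps):
        G = U - Z                                        # fidelity term
        diff = U[:, None, :] - U[None, :, :]             # (n, n, d)
        norms = np.linalg.norm(diff, axis=2) + 1e-12     # avoid 0-division
        G += lam * (diff / norms[:, :, None]).sum(axis=1)
        U -= lr * G
    return U

# Toy example: two well-separated blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
Z = kernel_embedding(rbf_kernel(X, gamma=0.5), dim=2)
U = convex_cluster(Z, lam=0.2)
```

Increasing `lam` fuses more centroids, tracing out a full clustering path without ever fixing the number of clusters in advance; the kernel step is what lets this handle non-convex structure that plain convex clustering misses.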
Similar Papers
Kernel-Based Nonparametric Tests For Shape Constraints
Machine Learning (Stat)
Helps make better money choices with math.
A Kernel-based Stochastic Approximation Framework for Nonlinear Operator Learning
Machine Learning (Stat)
Teaches computers to solve hard math problems.