Score: 2

Coresets for Clustering Under Stochastic Noise

Published: October 27, 2025 | arXiv ID: 2510.23438v1

By: Lingxiao Huang, Zhize Li, Nisheeth K. Vishnoi, and more

BigTech Affiliations: Princeton University

Potential Business Impact:

Builds compact, provably accurate summaries (coresets) of noisy datasets, so large-scale clustering and downstream machine learning can run faster without sacrificing accuracy.

Business Areas:
Smart Cities, Real Estate

We study the problem of constructing coresets for $(k, z)$-clustering when the input dataset is corrupted by stochastic noise drawn from a known distribution. In this setting, evaluating the quality of a coreset is inherently challenging, as the true underlying dataset is unobserved. To address this, we investigate coreset construction using surrogate error metrics that are tractable and provably related to the true clustering cost. We analyze a traditional metric from prior work and introduce a new error metric that more closely aligns with the true cost. Although our metric is defined independently of the noise distribution, it enables approximation guarantees that scale with the noise level. We design a coreset construction algorithm based on this metric and show that, under mild assumptions on the data and noise, enforcing an $\varepsilon$-bound under our metric yields smaller coresets and tighter guarantees on the true clustering cost than those obtained via classical metrics. In particular, we prove that the coreset size can improve by a factor of up to $\mathrm{poly}(k)$. Experiments on real-world datasets support our theoretical findings and demonstrate the practical advantages of our approach.
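To make the coreset idea concrete, here is a minimal sketch of a standard sensitivity-sampling coreset for $k$-means (the $z = 2$ case), evaluated on data corrupted by Gaussian noise. This is an illustrative baseline, not the paper's algorithm or its new error metric; the sensitivity proxy, weights, and parameters below are assumptions chosen for clarity.

```python
import numpy as np

def kmeans_cost(points, centers, weights=None):
    """Sum of (weighted) squared distances to each point's nearest center."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.min(axis=1)
    return nearest.sum() if weights is None else (weights * nearest).sum()

def sensitivity_coreset(points, m, rng):
    """Sample m points with probability mixing a crude 1-means sensitivity
    proxy (squared distance to the mean) and the uniform distribution,
    then reweight by 1/(m * q) so coreset costs are unbiased estimates."""
    n = points.shape[0]
    mu = points.mean(axis=0)
    d2 = ((points - mu) ** 2).sum(axis=1)
    q = 0.5 * d2 / d2.sum() + 0.5 / n      # sampling distribution
    idx = rng.choice(n, size=m, replace=True, p=q)
    weights = 1.0 / (m * q[idx])
    return points[idx], weights

rng = np.random.default_rng(0)
# Clean clustered data plus stochastic noise from a known (Gaussian) model.
true_centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
clean = np.repeat(true_centers, 400, axis=0) + 0.5 * rng.standard_normal((1200, 2))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

coreset, w = sensitivity_coreset(noisy, m=300, rng=rng)
full_cost = kmeans_cost(noisy, true_centers)
core_cost = kmeans_cost(coreset, true_centers, weights=w)
rel_err = abs(core_cost - full_cost) / full_cost
```

The weighted coreset cost approximates the full-data cost for any fixed set of centers; the noisy-data setting in the paper asks how well such guarantees transfer to the unobserved clean data, which this sketch does not address.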

Country of Origin
πŸ‡¨πŸ‡³ πŸ‡ΈπŸ‡¬ πŸ‡ΊπŸ‡Έ China, Singapore, United States

Page Count
49 pages

Category
Computer Science:
Machine Learning (CS)