Online Correlation Clustering: Simultaneously Optimizing All $\ell_p$-norms
By: Sami Davies, Benjamin Moseley, Heather Newman
Potential Business Impact:
Groups items fairly and accurately, even when the data arrives one piece at a time.
The $\ell_p$-norm objectives for correlation clustering present a fundamental trade-off between minimizing total disagreements (the $\ell_1$-norm) and ensuring fairness to individual nodes (the $\ell_\infty$-norm). Surprisingly, in the offline setting it is possible to simultaneously approximate all $\ell_p$-norms with a single clustering. Can this powerful guarantee be achieved in an online setting? This paper provides the first affirmative answer. We present a single algorithm for the online-with-a-sample (AOS) model that, given a small constant fraction of the input as a sample, produces one clustering that is simultaneously $O(\log^4 n)$-competitive for all $\ell_p$-norms with high probability, $O(\log n)$-competitive for the $\ell_\infty$-norm with high probability, and $O(1)$-competitive for the $\ell_1$-norm in expectation. This work translates the offline "all-norms" guarantee to the online world. Our setting is motivated by a new hardness result that demonstrates a fundamental separation between these objectives in the standard random-order (RO) online model. Namely, while the $\ell_1$-norm is trivially $O(1)$-approximable in the RO model, we prove that any algorithm in the RO model for the fairness-promoting $\ell_\infty$-norm must have a competitive ratio of at least $\Omega(n^{1/3})$. This highlights the necessity of a different beyond-worst-case model. We complement our algorithm with lower bounds, showing our competitive ratios for the $\ell_1$- and $\ell_\infty$-norms are nearly tight in the AOS model.
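For readers unfamiliar with the objective, here is a brief sketch of the standard formulation from the correlation clustering literature; the notation is illustrative and not necessarily the paper's own. The input is a complete graph on $n$ nodes in which every edge is labeled $+$ (similar) or $-$ (dissimilar). A clustering $\mathcal{C}$ induces a disagreement vector $y(\mathcal{C})$, where $y_v(\mathcal{C})$ counts the edges at node $v$ that disagree with $\mathcal{C}$: a $+$ edge whose endpoints land in different clusters, or a $-$ edge whose endpoints land in the same cluster. The $\ell_p$-norm objective is then

$$\mathrm{obj}_p(\mathcal{C}) \;=\; \lVert y(\mathcal{C}) \rVert_p \;=\; \Big( \sum_{v} y_v(\mathcal{C})^p \Big)^{1/p},$$

so $p = 1$ is, up to a factor of two, the total number of disagreements, while $p = \infty$ is $\max_v y_v(\mathcal{C})$, the disagreement count of the worst-off node, which is why it is read as a fairness guarantee.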
Similar Papers
Improved Online Load Balancing in the Two-Norm
Data Structures and Algorithms
Makes computer tasks finish faster and fairer.
An Efficient Algorithm for Minimizing Ordered Norms in Fractional Load Balancing
Data Structures and Algorithms
Helps computers share resources fairly and efficiently.
Implicitly Normalized Online PCA: A Regularized Algorithm with Exact High-Dimensional Dynamics
Machine Learning (Stat)
Teaches computers to learn better from data.