Unifying Information-Theoretic and Pair-Counting Clustering Similarity
By: Alexander J. Gates
Potential Business Impact:
Unifies the two main ways of checking how well computer-generated groupings agree.
Comparing clusterings is central to evaluating unsupervised models, yet the many existing similarity measures can produce widely divergent, sometimes contradictory, evaluations. Clustering similarity measures are typically organized into two principal families, pair-counting and information-theoretic, reflecting whether they quantify agreement through element pairs or aggregate information across full cluster contingency tables. Prior work has uncovered parallels between these families and applied empirical normalization or chance-correction schemes, but their deeper analytical connection remains only partially understood. Here, we develop an analytical framework that unifies these families through two complementary perspectives. First, both families are expressed as weighted expansions of observed versus expected co-occurrences, with pair-counting arising as a quadratic, low-order approximation and information-theoretic measures as higher-order, frequency-weighted extensions. Second, we generalize pair-counting to $k$-tuple agreement and show that information-theoretic measures can be viewed as systematically accumulating higher-order co-assignment structure beyond the pairwise level. We illustrate the approaches analytically for the Rand index and Mutual Information, and show how other indices in each family emerge as natural extensions. Together, these views clarify when and why the two regimes diverge, relating their sensitivities directly to weighting and approximation order, and provide a principled basis for selecting, interpreting, and extending clustering similarity measures across applications.
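As a concrete point of reference for the two families named in the abstract, the short sketch below computes the Rand index (pair-counting) and Mutual Information (information-theoretic) for the same pair of clusterings from their cluster contingency table. This is a minimal illustration using the standard textbook formulas, not the paper's analytical framework; the function names, toy labelings, and natural-log convention are assumptions made here for the example.

from collections import Counter
from math import comb, log

def contingency(labels_u, labels_v):
    # Joint counts n_ij: number of elements placed in cluster i of U and cluster j of V.
    return Counter(zip(labels_u, labels_v))

def rand_index(labels_u, labels_v):
    # Pair-counting family: fraction of element pairs on which the two clusterings agree
    # (together in both, or apart in both), written in terms of contingency-table counts.
    n = len(labels_u)
    nij = contingency(labels_u, labels_v)
    a = Counter(labels_u)  # row sums a_i
    b = Counter(labels_v)  # column sums b_j
    same_both = sum(comb(c, 2) for c in nij.values())
    same_u = sum(comb(c, 2) for c in a.values())
    same_v = sum(comb(c, 2) for c in b.values())
    total = comb(n, 2)
    return (total + 2 * same_both - same_u - same_v) / total

def mutual_information(labels_u, labels_v):
    # Information-theoretic family: frequency-weighted log-ratio of observed to expected
    # co-occurrence, MI = sum_ij (n_ij/n) * log(n_ij * n / (a_i * b_j)), in nats.
    n = len(labels_u)
    nij = contingency(labels_u, labels_v)
    a = Counter(labels_u)
    b = Counter(labels_v)
    return sum((c / n) * log(c * n / (a[i] * b[j])) for (i, j), c in nij.items())

# Toy example: two clusterings of ten elements (labels are illustrative only).
reference = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
candidate = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2]
print("Rand index:        ", rand_index(reference, candidate))   # ~0.644
print("Mutual information:", mutual_information(reference, candidate))  # ~0.416 nats

Note that the two quantities live on different scales and weight agreements differently: the Rand index counts pairwise co-assignments, while Mutual Information accumulates frequency-weighted deviations across the full contingency table, which is why the two families can rank the same pairs of clusterings differently.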
Similar Papers
The Information Theory of Similarity
Information Theory
Uses information theory to measure how alike things are.
Similarity as Thermodynamic Work: Between Depth and Diversity -- from Information Distance to Ugly Duckling
Information Theory
Measures how alike things are by the work needed to turn one into the other.
Statistical Inference for Manifold Similarity and Alignability across Noisy High-Dimensional Datasets
Statistics Theory
Compares complex data by looking at its hidden shapes.