Pairwise Comparison for Bias Identification and Quantification
By: Fabian Haak, Philipp Schaer
Linguistic bias in online news and social media is widespread, yet its identification and quantification remain difficult due to subjectivity, context dependence, and the scarcity of high-quality gold-label datasets. We aim to reduce annotation effort by leveraging pairwise comparison for bias annotation. To offset the cost of this approach, we evaluate more efficient implementations of pairwise comparison-based rating, investigating the effects of various rating techniques and the parameters of three cost-aware alternatives in a simulation environment. The controlled simulations include latent severity distributions, distance-calibrated noise, and synthetic annotator bias to probe robustness and cost-quality trade-offs. Since the approach can in principle be applied to both human and large language model annotation, our work provides a basis for creating high-quality benchmark datasets and for quantifying biases and other subjective linguistic aspects. We then apply the approach to human-labeled bias benchmark datasets, evaluating the most promising setups and comparing them against direct assessment by large language models and unmodified pairwise comparison labels as baselines. Our findings support the use of pairwise comparison as a practical foundation for quantifying subjective linguistic aspects, enabling reproducible bias analysis. We contribute an optimization of comparison and matchmaking components, an end-to-end evaluation spanning simulation and real-data application, and an implementation blueprint for cost-aware large-scale annotation.
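To make the kind of pairwise comparison-based rating and matchmaking described above more concrete, the sketch below simulates items with latent severity scores, distance-calibrated comparison noise, and an Elo-style rating update combined with a simple least-compared matchmaking rule. This is a minimal illustration under stated assumptions: the Elo update, the logistic noise model, and all function names and parameters are hypothetical and not taken from the paper's actual implementation.

```python
# Minimal simulation sketch (assumptions: Elo-style updates, logistic
# distance-calibrated noise, least-compared matchmaking). Names and
# parameters are illustrative, not the paper's implementation.
import math
import random


def simulate_comparison(sev_a, sev_b, noise_scale=1.0):
    """Noisy pairwise judgment: the closer two latent severities are,
    the more likely the simulated annotator picks the 'wrong' item."""
    p_a_wins = 1.0 / (1.0 + math.exp(-(sev_a - sev_b) / noise_scale))
    return 1 if random.random() < p_a_wins else 0  # 1 -> A judged more biased


def elo_update(rating_a, rating_b, outcome, k=32.0, scale=400.0):
    """Standard Elo update applied to bias-severity ratings."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / scale))
    new_a = rating_a + k * (outcome - expected_a)
    new_b = rating_b + k * ((1 - outcome) - (1 - expected_a))
    return new_a, new_b


def run_simulation(n_items=50, n_comparisons=1000, seed=0):
    random.seed(seed)
    latent = [random.gauss(0.0, 1.0) for _ in range(n_items)]  # latent severities
    ratings = [1000.0] * n_items
    counts = [0] * n_items
    for _ in range(n_comparisons):
        # Cost-aware matchmaking stand-in: prefer items compared least often.
        a, b = sorted(range(n_items), key=lambda i: (counts[i], random.random()))[:2]
        outcome = simulate_comparison(latent[a], latent[b])
        ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], outcome)
        counts[a] += 1
        counts[b] += 1
    return latent, ratings


if __name__ == "__main__":
    latent, ratings = run_simulation()
    # Agreement between the recovered ranking and the true latent severities
    # indicates how well the pairwise procedure quantifies the latent scale.
    top_true = sorted(range(len(latent)), key=lambda i: latent[i])[-5:]
    top_est = sorted(range(len(ratings)), key=lambda i: ratings[i])[-5:]
    print("most severe (true):", top_true)
    print("most severe (estimated):", top_est)
```

In a real annotation pipeline, the simulated judgment would be replaced by a human or large language model comparison, and the matchmaking rule is where cost-aware variants of the kind the paper evaluates would differ.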
Similar Papers
Direct-Scoring NLG Evaluators Can Use Pairwise Comparisons Too
Computation and Language
Lets computers give a grade to writing.
Efficient Bayesian Inference from Noisy Pairwise Comparisons
Machine Learning (CS)
Makes AI better by learning from people's opinions.
From Replication to Redesign: Exploring Pairwise Comparisons for LLM-Based Peer Review
Computation and Language
Helps choose the best science papers faster.