Minority Reports: Balancing Cost and Quality in Ground Truth Data Annotation
By: Hsuan Wei Liao, Christopher Klugmann, Daniel Kondermann, et al.
Potential Business Impact:
Cuts the cost of labeling training data for AI while keeping the labels accurate.
High-quality data annotation is an essential but laborious and costly part of developing machine learning-based software. We explore the inherent tradeoff between annotation accuracy and cost by detecting and removing minority reports -- instances where annotators provide incorrect responses -- which indicate unnecessary redundancy in task assignments. We propose an approach that prunes potentially redundant annotation task assignments before they are executed by estimating the likelihood that an annotator will disagree with the majority vote for a given task. Our approach is informed by an empirical analysis of computer vision datasets annotated by a professional data annotation platform, which reveals that the likelihood of a minority report depends primarily on image ambiguity, worker variability, and worker fatigue. Simulations over these datasets show that we can reduce the number of annotations required by over 60% with only a small compromise in label quality, saving approximately 6.6 days' equivalent of labor. Our approach gives annotation service platforms a method for balancing cost and dataset quality, and lets machine learning practitioners tailor annotation accuracy to specific application needs, optimizing budget allocation while maintaining the data quality required in critical settings such as autonomous driving.
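To make the pruning idea concrete, here is a minimal sketch, not the authors' implementation: a hypothetical logistic model scores each pending (task, worker) assignment by its predicted probability of producing a minority report, using the three factors the abstract identifies (image ambiguity, worker variability, worker fatigue), and assignments above a risk threshold are skipped before execution. All names, weights, and feature encodings below are illustrative assumptions.

```python
# Sketch of minority-report-based pruning (illustrative, not the paper's code):
# score each pending (task, worker) assignment with an estimated probability
# that the worker's label would disagree with the majority vote, then skip
# assignments whose predicted risk exceeds a threshold.

import math
from dataclasses import dataclass

@dataclass
class Assignment:
    task_id: str
    worker_id: str
    image_ambiguity: float     # e.g. entropy of a pilot label distribution, in [0, 1]
    worker_variability: float  # historical disagreement rate of this worker, in [0, 1]
    worker_fatigue: float      # e.g. normalized tasks done in the current session, in [0, 1]

def minority_report_probability(a: Assignment,
                                w_ambiguity: float = 2.0,
                                w_variability: float = 1.5,
                                w_fatigue: float = 1.0,
                                bias: float = -3.0) -> float:
    """Logistic model of P(worker disagrees with the majority vote).

    The weights here are placeholders; in practice they would be fitted on
    historical annotations where the majority vote is already known.
    """
    z = (bias
         + w_ambiguity * a.image_ambiguity
         + w_variability * a.worker_variability
         + w_fatigue * a.worker_fatigue)
    return 1.0 / (1.0 + math.exp(-z))

def prune(assignments: list[Assignment], threshold: float = 0.5) -> list[Assignment]:
    """Keep only assignments whose predicted minority-report risk is acceptable."""
    return [a for a in assignments
            if minority_report_probability(a) < threshold]

if __name__ == "__main__":
    pending = [
        Assignment("img_001", "worker_a", 0.1, 0.05, 0.2),  # clear image, reliable worker
        Assignment("img_002", "worker_b", 0.9, 0.40, 0.8),  # ambiguous image, tired worker
    ]
    for a in prune(pending, threshold=0.5):
        print(f"execute {a.task_id} -> {a.worker_id}")
```

Under this framing, the threshold is the knob that trades annotation cost against label quality: a lower threshold prunes more aggressively and saves more labor, at the price of occasionally dropping an annotation that would have corrected the majority.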
Similar Papers
Cost-Optimal Active AI Model Evaluation
Machine Learning (CS)
Evaluates AI models at lower cost by spending human effort where it matters most.
Data Annotation Quality Problems in AI-Enabled Perception System Development
Software Engineering
Finds quality problems in the labeled data behind AI perception systems, such as those in self-driving cars.
Balancing Quality and Variation: Spam Filtering Distorts Data Label Distributions
Computation and Language
Shows that filtering out spam labels can also remove legitimate variety in annotations.