Surrogate Representation Inference for Noisy Text and Image Annotations
By: Kentaro Nakamura
Potential Business Impact:
Makes research findings from AI-labeled texts and images more trustworthy.
As researchers increasingly rely on machine learning models and LLMs to annotate unstructured data, such as texts or images, various approaches have been proposed to correct the resulting bias in downstream statistical analysis. However, existing methods tend to yield large standard errors and require some error-free human annotations. In this paper, I introduce Surrogate Representation Inference (SRI), which assumes that the unstructured data fully mediate the relationship between human annotations and structured variables. This assumption is guaranteed by design provided that human coders rely only on the unstructured data for annotation. Under this setting, I propose a neural network architecture that learns a low-dimensional representation of the unstructured data such that the surrogate assumption remains satisfied. When multiple human annotations are available, SRI can further correct non-differential measurement errors that may exist in them. Focusing on text-as-outcome settings, I formally establish the identification conditions and semiparametrically efficient estimation strategies that enable learning and leveraging such a low-dimensional representation. Simulation studies and a real-world application demonstrate that SRI reduces standard errors by more than 50% when machine learning prediction accuracy is moderate and provides valid inference even when human annotations contain non-differential measurement errors.
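To make the surrogate-representation idea concrete, here is a minimal sketch in PyTorch. It is not the author's implementation: the synthetic data, dimensions, and names (SurrogateEncoder, annotation_head) are hypothetical, and the downstream difference-in-means step stands in for the paper's semiparametrically efficient estimator. The key design point it illustrates is that the annotation head sees only the low-dimensional representation, so the human annotation depends on the text only through that representation, mirroring the surrogate assumption that holds by design when coders annotate from the text alone.

```python
# Minimal sketch, assuming PyTorch and toy synthetic data; all names and
# dimensions are hypothetical and chosen only to illustrate the idea.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy setup: T = structured treatment, X = text embeddings (shifted by T),
# A = noisy human annotation that depends only on the text, by construction.
n, d_embed, d_rep = 1000, 64, 4
T = torch.bernoulli(torch.full((n, 1), 0.5))
X = torch.randn(n, d_embed) + 0.5 * T          # treatment shifts the text
A = torch.bernoulli(torch.sigmoid(X[:, :1]))   # coders label using text only


class SurrogateEncoder(nn.Module):
    """Map text embeddings to a low-dimensional representation R(X).
    The annotation head sees only R(X), so A depends on X only through
    the representation (the surrogate assumption, by design)."""

    def __init__(self, d_in, d_rep):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(),
                                     nn.Linear(32, d_rep))
        self.annotation_head = nn.Linear(d_rep, 1)

    def forward(self, x):
        r = self.encoder(x)
        return r, self.annotation_head(r)


model = SurrogateEncoder(d_embed, d_rep)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Fit the representation by predicting the human annotations from R(X).
for epoch in range(200):
    opt.zero_grad()
    _, logits = model(X)
    loss = loss_fn(logits, A)
    loss.backward()
    opt.step()

# Downstream step (placeholder): use the fitted annotation probability as a
# surrogate outcome and compare treatment arms. The paper's actual estimator
# is semiparametrically efficient and corrects measurement error.
with torch.no_grad():
    _, logits = model(X)
    y_hat = torch.sigmoid(logits)
    effect = (y_hat[T == 1].mean() - y_hat[T == 0].mean()).item()
print(f"naive surrogate-based effect estimate: {effect:.3f}")
```

In this sketch the bottleneck dimension d_rep is what makes the representation low-dimensional; the abstract's contribution is showing how such a representation can be learned and leveraged while preserving identification and efficiency, which this toy example does not attempt.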
Similar Papers
A Unifying Framework for Robust and Efficient Inference with Unstructured Data
Econometrics
Makes computers understand messy data without bias.
High-Fidelity Scientific Simulation Surrogates via Adaptive Implicit Neural Representations
Machine Learning (CS)
Makes computer models of science faster and smaller.
Optimization over Trained (and Sparse) Neural Networks: A Surrogate within a Surrogate
Optimization and Control
Makes computers solve hard problems faster.