ACT as Human: Multimodal Large Language Model Data Annotation with Critical Thinking
By: Lequan Lin, Dai Shi, Andi Han, and more
Potential Business Impact:
Helps computers learn faster with less human labeling work.
Supervised learning relies on high-quality labeled data, but obtaining such data through human annotation is both expensive and time-consuming. Recent work explores using large language models (LLMs) for annotation, but LLM-generated labels still fall short of human-level quality. To address this problem, we propose the Annotation with Critical Thinking (ACT) data pipeline, where LLMs serve not only as annotators but also as judges that critically identify potential errors. Human effort is then directed toward reviewing only the most "suspicious" cases, significantly improving human annotation efficiency. Our major contributions are as follows: (1) ACT is applicable to a wide range of domains, including natural language processing (NLP), computer vision (CV), and multimodal understanding, by leveraging multimodal LLMs (MLLMs). (2) Through empirical studies, we derive 7 insights on how to enhance annotation quality while efficiently reducing human cost, and then translate these findings into user-friendly guidelines. (3) We theoretically analyze how to modify the loss function so that models trained on ACT data achieve performance similar to those trained on fully human-annotated data. Our experiments show that the performance gap can be reduced to less than 2% on most benchmark datasets while saving up to 90% of human costs.
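To make the routing idea concrete, below is a minimal sketch of an ACT-style annotate-judge-review loop. This is not the authors' implementation: the function names (`llm_annotate`, `llm_judge`, `human_review`) and the fixed review budget are hypothetical placeholders standing in for the paper's actual components.

```python
# Minimal sketch of an ACT-style pipeline (hypothetical names, not the paper's code).
# An LLM proposes a label, an LLM "judge" scores how suspicious each label is,
# and only the most suspicious items are routed to a human reviewer.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Item:
    text: str                 # the example to annotate
    label: str = ""           # label assigned by the pipeline
    suspicion: float = 0.0    # judge's estimate that the label is wrong (0..1)
    human_reviewed: bool = False


def act_pipeline(
    items: List[Item],
    llm_annotate: Callable[[str], str],       # placeholder: LLM annotator
    llm_judge: Callable[[str, str], float],   # placeholder: LLM judge, returns suspicion
    human_review: Callable[[str, str], str],  # placeholder: human correction step
    review_budget: float = 0.1,               # fraction of items humans will check
) -> List[Item]:
    # Step 1: LLM annotates every item.
    for it in items:
        it.label = llm_annotate(it.text)

    # Step 2: LLM judge assigns a suspicion score to each (text, label) pair.
    for it in items:
        it.suspicion = llm_judge(it.text, it.label)

    # Step 3: route only the most suspicious items to humans, within the budget.
    n_review = max(1, int(review_budget * len(items)))
    for it in sorted(items, key=lambda x: x.suspicion, reverse=True)[:n_review]:
        it.label = human_review(it.text, it.label)
        it.human_reviewed = True

    return items


if __name__ == "__main__":
    # Toy stand-ins for the LLM and human calls, just to make the sketch runnable.
    data = [Item("great movie"), Item("terrible plot"), Item("it was fine")]
    annotated = act_pipeline(
        data,
        llm_annotate=lambda t: "positive" if "great" in t else "negative",
        llm_judge=lambda t, y: 0.9 if "fine" in t else 0.1,
        human_review=lambda t, y: "neutral",
        review_budget=0.34,
    )
    for it in annotated:
        print(it)
```

With a budget of roughly 10% (as in the paper's reported savings of up to 90% of human cost), only the top-scoring fraction of items ever reaches a human, while the rest keep their LLM-assigned labels.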
Similar Papers
Evaluating Large Language Models as Expert Annotators
Computation and Language
Computers learn to label text like experts.
Complementary Learning Approach for Text Classification using Large Language Models
Computation and Language
Helps people and computers work together better.
LAUD: Integrating Large Language Models with Active Learning for Unlabeled Data
Machine Learning (CS)
Teaches computers to learn from less data.