Evaluating LLM-corrupted Crowdsourcing Data Without Ground Truth

Published: June 8, 2025 | arXiv ID: 2506.06991v1

By: Yichi Zhang, Jinlong Pang, Zhaowei Zhu, and more

Potential Business Impact:

Detects LLM-generated responses submitted by crowdsourcing workers, helping requesters keep human-feedback datasets trustworthy without needing ground-truth labels.

Business Areas:
Crowdsourcing Collaboration

The recent success of generative AI highlights the crucial role of high-quality human feedback in building trustworthy AI systems. However, the increasing use of large language models (LLMs) by crowdsourcing workers poses a significant challenge: datasets intended to reflect human input may be compromised by LLM-generated responses. Existing LLM detection approaches often rely on high-dimensional training data such as text, making them unsuitable for annotation tasks like multiple-choice labeling. In this work, we investigate the potential of peer prediction -- a mechanism that evaluates the information within workers' responses without using ground truth -- to mitigate LLM-assisted cheating in crowdsourcing, with a focus on annotation tasks. Our approach quantifies the correlations between worker answers while conditioning on (a subset of) LLM-generated labels available to the requester. Building on prior research, we propose a training-free scoring mechanism with theoretical guarantees under a crowdsourcing model that accounts for LLM collusion. We establish conditions under which our method is effective and empirically demonstrate its robustness in detecting low-effort cheating on real-world crowdsourcing datasets.
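The conditioning idea lends itself to a compact illustration. Below is a minimal sketch, not the paper's exact mechanism: each worker is scored by the average conditional mutual information between their answers and each peer's answers, conditioning on the LLM-generated label the requester already holds for each task. A worker who simply copies the LLM contributes no information beyond the conditioning label and scores near zero. All function and variable names here are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def conditional_mutual_information(a, b, z, n_labels):
    """Empirical I(A; B | Z) for discrete label vectors a, b, z."""
    cmi = 0.0
    for zv in range(n_labels):
        mask = (z == zv)
        m = mask.sum()
        if m == 0:
            continue
        pz = mask.mean()                    # p(z)
        az, bz = a[mask], b[mask]
        joint = Counter(zip(az, bz))        # counts of (a, b) given z
        pa, pb = Counter(az), Counter(bz)   # marginal counts given z
        for (av, bv), c in joint.items():
            p_ab = c / m
            cmi += pz * p_ab * np.log(p_ab / ((pa[av] / m) * (pb[bv] / m)))
    return cmi

def peer_prediction_scores(answers, llm_labels, n_labels):
    """answers: (n_workers, n_tasks) int array; llm_labels: (n_tasks,) int array.
    Score each worker by mean CMI with peers, conditioned on the LLM labels."""
    n_workers = answers.shape[0]
    scores = np.zeros(n_workers)
    for i in range(n_workers):
        peers = [j for j in range(n_workers) if j != i]
        scores[i] = np.mean([
            conditional_mutual_information(answers[i], answers[j],
                                           llm_labels, n_labels)
            for j in peers
        ])
    return scores
```

On synthetic data the separation is easy to see: honest workers still correlate with each other beyond the (noisy) LLM label, while a copier is constant once the LLM label is fixed.

```python
rng = np.random.default_rng(0)
truth = rng.integers(0, 3, size=500)                              # latent true labels
llm = np.where(rng.random(500) < 0.8, truth, rng.integers(0, 3, 500))
human = lambda: np.where(rng.random(500) < 0.7, truth, rng.integers(0, 3, 500))
answers = np.stack([human(), human(), llm.copy()])                # worker 2 copies the LLM
print(peer_prediction_scores(answers, llm, 3))                    # copier scores near 0
```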

Country of Origin
🇺🇸 United States


Page Count
33 pages

Category
Computer Science:
Artificial Intelligence