Score: 2

LLM Performance Predictors: Learning When to Escalate in Hybrid Human-AI Moderation Systems

Published: January 11, 2026 | arXiv ID: 2601.07006v1

By: Or Bachar, Or Levi, Sardhendu Mishra, and more

Potential Business Impact:

Helps AI know when to ask humans for help.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As LLMs are increasingly integrated into human-in-the-loop content moderation systems, a central challenge is deciding when their outputs can be trusted versus when escalation for human review is preferable. We propose a novel framework for supervised LLM uncertainty quantification, learning a dedicated meta-model based on LLM Performance Predictors (LPPs) derived from LLM outputs: log-probabilities, entropy, and novel uncertainty attribution indicators. We demonstrate that our method enables cost-aware selective classification in real-world human-AI workflows: escalating high-risk cases while automating the rest. Experiments across state-of-the-art LLMs, including both off-the-shelf (Gemini, GPT) and open-source (Llama, Qwen), on multimodal and multilingual moderation tasks, show significant improvements over existing uncertainty estimators in accuracy-cost trade-offs. Beyond uncertainty estimation, the LPPs enhance explainability by providing new insights into failure conditions (e.g., ambiguous content vs. under-specified policy). This work establishes a principled framework for uncertainty-aware, scalable, and responsible human-AI moderation workflows.
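The abstract describes training a dedicated meta-model on LLM Performance Predictors (e.g., log-probabilities and entropy) and using its predicted risk for cost-aware escalation to human reviewers. Below is a minimal sketch of that general idea; the logistic-regression meta-model, the synthetic feature values, and the cost threshold are illustrative assumptions, not the paper's actual method or numbers.

```python
# Sketch: a meta-model over LLM-derived uncertainty features (LPP-style),
# used for cost-aware selective classification (escalate vs. automate).
# All feature definitions, model choice, and costs are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins for per-item LPP features:
# [mean log-probability of the predicted label tokens, entropy over candidate labels]
features = np.column_stack([
    rng.normal(-0.5, 0.3, n),   # higher mean log-prob = more confident
    rng.gamma(2.0, 0.3, n),     # higher entropy = less confident
])

# Synthetic ground truth: low-confidence / high-entropy items are more likely wrong.
logit = 3.0 * features[:, 1] - 4.0 * (features[:, 0] + 0.5) - 2.0
llm_wrong = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Meta-model: predict the probability that the LLM's moderation label is wrong.
meta_model = LogisticRegression().fit(features[:1500], llm_wrong[:1500])
risk = meta_model.predict_proba(features[1500:])[:, 1]

# Cost-aware escalation rule: send the item to a human when the expected cost of
# an LLM error exceeds the (assumed) cost of a human review.
COST_ERROR, COST_HUMAN = 10.0, 1.0
escalate = risk * COST_ERROR > COST_HUMAN

print(f"escalation rate: {escalate.mean():.2%}")
print(f"error rate among auto-handled items: {llm_wrong[1500:][~escalate].mean():.2%}")
```

In this toy setup the threshold trades review cost against error cost; sweeping COST_HUMAN / COST_ERROR traces out the accuracy-cost trade-off curve the abstract refers to.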

Country of Origin
🇮🇱 Israel

Repos / Data Links

Page Count
12 pages

Category
Computer Science:
Artificial Intelligence