IntroLM: Introspective Language Models via Prefilling-Time Self-Evaluation
By: Hossein Hosseini Kasnavieh, Gholamreza Haffari, Chris Leckie, et al.
Potential Business Impact:
Helps an AI predict whether its own answer will be good.
A major challenge in operating large language models (LLMs) is predicting whether a specific LLM will produce sufficiently high-quality output for a given query. Existing approaches rely on external classifiers, most commonly BERT-based models, which suffer from limited context windows, constrained representational capacity, and additional computational overhead. We propose IntroLM, a method that enables causal language models to predict their own output quality during the prefilling phase, using introspective tokens, without affecting generation. By introducing a token-conditional LoRA that activates only for the introspective token, the model learns to predict output quality for a given query while preserving the original backbone behavior and avoiding external evaluators. On question-answering benchmarks, IntroLM applied to Qwen3-8B achieves a ROC AUC of 90% for success prediction, outperforming a DeBERTa classifier by 14%. When integrated into multi-model routing systems, IntroLM achieves superior cost-performance tradeoffs, reducing latency by up to 33% and large-model usage by up to 50% at matched reliability.
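To make the mechanism concrete, here is a minimal PyTorch sketch of a token-conditional LoRA layer gated by an introspective-token mask. The class name `TokenConditionalLoRA`, the mask convention, and the toy shapes are illustrative assumptions, not the paper's actual code; it shows only the core idea that the low-rank update fires solely at the introspective token's position.

```python
import torch
import torch.nn as nn

class TokenConditionalLoRA(nn.Module):
    """LoRA adapter applied only at positions flagged by a boolean mask
    (the introspective token); the frozen backbone projection is used
    unchanged everywhere else. Names and shapes are illustrative."""

    def __init__(self, base_linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear
        self.base.requires_grad_(False)          # backbone stays frozen
        d_in, d_out = base_linear.in_features, base_linear.out_features
        self.lora_a = nn.Linear(d_in, rank, bias=False)
        self.lora_b = nn.Linear(rank, d_out, bias=False)
        nn.init.zeros_(self.lora_b.weight)       # adapter starts as an exact no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor, intro_mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_in); intro_mask: (batch, seq) bool,
        # True only at the introspective-token position.
        out = self.base(x)
        delta = self.lora_b(self.lora_a(x)) * self.scale
        # Zero out the low-rank update everywhere except the introspective token.
        return out + delta * intro_mask.unsqueeze(-1).to(delta.dtype)

# Toy demo: batch of 2 sequences of length 5, introspective token appended last.
layer = TokenConditionalLoRA(nn.Linear(16, 16))
x = torch.randn(2, 5, 16)
mask = torch.zeros(2, 5, dtype=torch.bool)
mask[:, -1] = True          # hypothetical [INTRO] token at the end of the query
y = layer(x, mask)          # backbone output everywhere; backbone + LoRA at [INTRO]
```

Zero-initializing `lora_b` makes the adapter a no-op before training, so generation behavior is untouched until the introspective pathway learns something; at inference, a lightweight head on the introspective token's hidden state (computed during prefill) would yield the success probability.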
Similar Papers
Introspective Growth: Automatically Advancing LLM Expertise in Technology Judgment
Computation and Language
Helps computers understand complex ideas better.
Emergent Introspective Awareness in Large Language Models
Computation and Language
Computers can sometimes know what they are thinking.
Enhancing LLM Instruction Following: An Evaluation-Driven Multi-Agentic Workflow for Prompt Instructions Optimization
Artificial Intelligence
Makes AI follow rules better for correct answers.