Can LLMs Predict Their Own Failures? Self-Awareness via Internal Circuits
By: Amirhosein Ghasemabadi, Di Niu
Potential Business Impact:
Helps AI know when it's wrong.
Large language models (LLMs) generate fluent and complex outputs but often fail to recognize their own mistakes and hallucinations. Existing approaches typically rely on external judges, multi-sample consistency, or text-based self-critique, which incur additional compute or correlate weakly with true correctness. We ask: can LLMs predict their own failures by inspecting internal states during inference? We introduce Gnosis, a lightweight self-awareness mechanism that enables frozen LLMs to perform intrinsic self-verification by decoding signals from hidden states and attention patterns. Gnosis passively observes internal traces, compresses them into fixed-budget descriptors, and predicts correctness with negligible inference cost, adding only ~5M parameters and operating independently of sequence length. Across math reasoning, open-domain question answering, and academic knowledge benchmarks, and over frozen backbones ranging from 1.7B to 20B parameters, Gnosis consistently outperforms strong internal baselines and large external judges in both accuracy and calibration. Moreover, it generalizes zero-shot to partial generations, enabling early detection of failing trajectories and compute-aware control. These results show that reliable correctness cues are intrinsic to the generation process and can be extracted efficiently without external supervision.
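To make the idea of decoding correctness from internal traces concrete, here is a minimal PyTorch sketch of a probe that pools hidden states and attention statistics from a frozen backbone into a fixed number of descriptors and outputs a correctness probability. The class name, pooling scheme, feature choices, and dimensions are all illustrative assumptions, not the paper's actual Gnosis architecture.

```python
# Hypothetical sketch of an internal-state correctness probe in the spirit of Gnosis.
# All names, dimensions, and design choices here are assumptions for illustration.
import torch
import torch.nn as nn


class CorrectnessProbe(nn.Module):
    """Reads internal traces of a frozen LLM and predicts P(output is correct).

    A variable-length trace of hidden states and attention statistics is
    compressed into a fixed budget of descriptor vectors, so the probe's
    cost does not grow with sequence length.
    """

    def __init__(self, hidden_dim: int, num_heads: int,
                 num_descriptors: int = 16, probe_dim: int = 256):
        super().__init__()
        # Learnable queries that pool the trace into `num_descriptors` vectors.
        self.queries = nn.Parameter(torch.randn(num_descriptors, probe_dim) * 0.02)
        self.hidden_proj = nn.Linear(hidden_dim, probe_dim)   # project hidden states
        self.attn_proj = nn.Linear(num_heads, probe_dim)      # project attention stats
        self.pool = nn.MultiheadAttention(probe_dim, num_heads=4, batch_first=True)
        self.classifier = nn.Sequential(
            nn.LayerNorm(probe_dim),
            nn.Linear(probe_dim, probe_dim),
            nn.GELU(),
            nn.Linear(probe_dim, 1),
        )

    def forward(self, hidden_states: torch.Tensor, attn_stats: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) from a chosen frozen layer.
        # attn_stats:    (batch, seq_len, num_heads), e.g. per-head attention entropies.
        tokens = self.hidden_proj(hidden_states) + self.attn_proj(attn_stats)
        queries = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        descriptors, _ = self.pool(queries, tokens, tokens)    # fixed-budget summary
        score = self.classifier(descriptors.mean(dim=1))       # (batch, 1)
        return torch.sigmoid(score).squeeze(-1)                # predicted correctness


if __name__ == "__main__":
    probe = CorrectnessProbe(hidden_dim=2048, num_heads=32)
    h = torch.randn(2, 100, 2048)   # captured hidden states from a frozen backbone
    a = torch.rand(2, 100, 32)      # per-head attention statistics
    print(probe(h, a))              # correctness probabilities in [0, 1]
```

Because the pooled descriptors have a fixed size, the same probe can in principle be applied to partial generations as well, which is the property the abstract points to for early detection of failing trajectories.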
Similar Papers
Can LLMs Detect Their Own Hallucinations?
Computation and Language
Helps computers spot when they make up facts.
Large Language Models Do NOT Really Know What They Don't Know
Computation and Language
Computers can't tell true from fake facts.
Introspective Growth: Automatically Advancing LLM Expertise in Technology Judgment
Computation and Language
Helps computers understand complex ideas better.