Polysemantic Dropout: Conformal OOD Detection for Specialized LLMs
By: Ayush Gupta, Ramneet Kaur, Anirban Roy, and more
Potential Business Impact:
Flags questions a specialized AI model was not trained for, so unreliable answers can be caught before they cause harm.
We propose a novel inference-time out-of-domain (OOD) detection algorithm for specialized large language models (LLMs). Despite achieving state-of-the-art performance on in-domain tasks through fine-tuning, specialized LLMs remain vulnerable to incorrect or unreliable outputs when presented with OOD inputs, posing risks in critical applications. Our method leverages the Inductive Conformal Anomaly Detection (ICAD) framework, using a new non-conformity measure based on the model's dropout tolerance. Motivated by recent findings on polysemanticity and redundancy in LLMs, we hypothesize that in-domain inputs exhibit higher dropout tolerance than OOD inputs. We aggregate dropout tolerance across multiple layers via a valid ensemble approach, improving detection while maintaining theoretical false alarm bounds from ICAD. Experiments with medical-specialized LLMs show that our approach detects OOD inputs better than baseline methods, with AUROC improvements of $2\%$ to $37\%$ when treating OOD datapoints as positives and in-domain test datapoints as negatives.
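To make the pipeline concrete, here is a minimal sketch of ICAD-style detection in Python. It assumes per-layer non-conformity scores (e.g., negative dropout tolerance, which the paper derives from how well the model withstands dropout) have already been computed on a held-out in-domain calibration set; the helper names, the toy data, and the "twice the mean" p-value merging rule (one provably valid combination, due to Vovk and Wang) are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def icad_p_value(calibration_scores, test_score):
    """ICAD p-value: the (smoothed) fraction of calibration non-conformity
    scores at least as large as the test input's score."""
    n = len(calibration_scores)
    return (1 + int(np.sum(np.asarray(calibration_scores) >= test_score))) / (n + 1)

def detect_ood(per_layer_cal_scores, per_layer_test_scores, epsilon=0.05):
    """Hypothetical sketch of the ensemble step: merge per-layer ICAD
    p-values and flag the input as OOD if the merged p-value is small.

    per_layer_cal_scores: one array of calibration non-conformity scores
        per probed layer (computed on held-out in-domain data).
    per_layer_test_scores: one non-conformity score per layer for the
        test input (OOD inputs are hypothesized to score higher, since
        they tolerate less dropout).
    """
    p_values = [
        icad_p_value(cal, s)
        for cal, s in zip(per_layer_cal_scores, per_layer_test_scores)
    ]
    # Twice the arithmetic mean is one valid p-value merging function
    # (Vovk & Wang, 2020); the paper's ensemble may differ.
    p_combined = min(1.0, 2.0 * float(np.mean(p_values)))
    return p_combined < epsilon, p_combined

# Toy usage: 3 probed layers, 100 in-domain calibration scores each.
rng = np.random.default_rng(0)
cal = [rng.normal(size=100) for _ in range(3)]
test = [2.5, 1.8, 3.1]  # unusually high non-conformity at every layer
is_ood, p = detect_ood(cal, test, epsilon=0.05)
print(is_ood, p)
```

For in-domain inputs, each ICAD p-value satisfies $P(p \le \epsilon) \le \epsilon$, and a valid merging function preserves this bound; that is the theoretical false alarm guarantee the abstract refers to.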
Similar Papers
Graph Synthetic Out-of-Distribution Exposure with Large Language Models
Machine Learning (CS)
Creates fake outlier examples so graph models can spot unfamiliar data.
SupLID: Geometrical Guidance for Out-of-Distribution Detection in Semantic Segmentation
CV and Pattern Recognition
Helps scene-understanding models, like those in self-driving cars, flag objects they were not trained on.