Lost in the Noise: How Reasoning Models Fail with Contextual Distractors
By: Seongyun Lee, Yongrae Jo, Minju Seo, and more
Recent advances in reasoning models and agentic AI systems have led to an increased reliance on diverse external information. However, this shift introduces input contexts that are inherently noisy, a reality that current sanitized benchmarks fail to capture. We introduce NoisyBench, a comprehensive benchmark that systematically evaluates model robustness across 11 datasets spanning RAG, reasoning, alignment, and tool-use tasks against diverse noise types, including random documents, irrelevant chat histories, and hard negative distractors. Our evaluation reveals a catastrophic performance drop of up to 80% in state-of-the-art models when faced with contextual distractors. Crucially, we find that agentic workflows often amplify these errors by over-trusting noisy tool outputs, and that distractors can trigger emergent misalignment even without adversarial intent. We find that prompting, context engineering, SFT, and outcome-reward-only RL all fail to ensure robustness; in contrast, our proposed Rationale-Aware Reward (RARE) significantly strengthens resilience by incentivizing the identification of helpful information within noise. Finally, we uncover an inverse scaling trend in which increased test-time computation leads to worse performance in noisy settings, and we demonstrate via attention visualization that models disproportionately focus on distractor tokens, providing vital insights for building the next generation of robust, reasoning-capable agents.
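The core evaluation idea (mix gold evidence with sampled distractor documents, then compare clean vs. noisy accuracy) can be sketched as follows. This is a minimal illustration, not the paper's code: the function names, the shuffling scheme, and the toy data are all hypothetical assumptions.

```python
import random

def build_noisy_context(gold_docs, distractor_pool, n_distractors, seed=0):
    """Interleave gold evidence with sampled distractors, in the spirit of
    NoisyBench-style setups (hypothetical sketch, not the paper's API)."""
    rng = random.Random(seed)
    distractors = rng.sample(distractor_pool, n_distractors)
    docs = list(gold_docs) + distractors
    rng.shuffle(docs)  # position of the gold evidence is randomized
    return "\n\n".join(docs)

def robustness_drop(clean_acc, noisy_acc):
    """Relative performance drop under noise; the abstract reports drops
    of up to 80% for state-of-the-art models."""
    return (clean_acc - noisy_acc) / clean_acc

# Toy usage with placeholder data
gold = ["Paris is the capital of France."]
pool = [f"Random document {i}." for i in range(10)]
context = build_noisy_context(gold, pool, n_distractors=3)
print(robustness_drop(0.90, 0.18))  # → 0.8, i.e. an 80% relative drop
```

A model would then be queried once with the gold documents alone and once with the noisy context, and the two accuracies compared via `robustness_drop`.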
Similar Papers
DAGGER: Distractor-Aware Graph Generation for Executable Reasoning in Math Problems
Computation and Language
Helps computers solve math problems with distractions.
When Small Models Are Right for Wrong Reasons: Process Verification for Trustworthy Agents
Machine Learning (CS)
Fixes AI that gives right answers for wrong reasons.
Dynamic Context Selection for Retrieval-Augmented Generation: Mitigating Distractors and Positional Bias
Information Retrieval
Finds better answers by choosing the best info.