ContextFocus: Activation Steering for Contextual Faithfulness in Large Language Models

Published: January 7, 2026 | arXiv ID: 2601.04131v1

By: Nikhil Anand, Shwetha Somasundaram, Anirudh Phukan, and more

Potential Business Impact:

Helps LLMs prioritize up-to-date retrieved context over outdated memorized facts.

Business Areas:
Semantic Search, Internet Services

Large Language Models (LLMs) encode vast amounts of parametric knowledge during pre-training. As world knowledge evolves, effective deployment increasingly depends on their ability to faithfully follow externally retrieved context. When such evidence conflicts with the model's internal knowledge, LLMs often default to memorized facts, producing unfaithful outputs. In this work, we introduce ContextFocus, a lightweight activation steering approach that improves context faithfulness in such knowledge-conflict settings while preserving fluency and efficiency. Unlike prior approaches, our solution requires no model finetuning and incurs minimal inference-time overhead, making it highly efficient. We evaluate ContextFocus on the ConFiQA benchmark, comparing it against strong baselines including ContextDPO, COIECD, and prompting-based methods. We further show that our method is complementary to prompting strategies and remains effective on larger models. Extensive experiments show that ContextFocus significantly improves contextual faithfulness, highlighting the method's effectiveness, robustness, and efficiency.
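The abstract does not give implementation details, but activation steering in general works by adding a scaled direction vector to a model's hidden activations at a chosen layer during inference, with no weight updates. A minimal sketch of that core operation (all names and values here are hypothetical illustrations, not the authors' released code):

```python
# Toy sketch of activation steering: shift a hidden state along a
# precomputed "faithfulness" direction, scaled by alpha, at inference
# time only. Real implementations apply this inside a transformer
# layer (e.g., via a forward hook); here we show just the arithmetic.

def steer(hidden, direction, alpha=1.0):
    """Return hidden + alpha * unit(direction), elementwise."""
    norm = sum(d * d for d in direction) ** 0.5
    unit = [d / norm for d in direction]
    return [h + alpha * u for h, u in zip(hidden, unit)]

# Example: 4-dim hidden state, steering direction along dimension 0.
h = [1.0, 2.0, 3.0, 4.0]
v = [1.0, 0.0, 0.0, 0.0]
print(steer(h, v, alpha=2.0))  # [3.0, 2.0, 3.0, 4.0]
```

Because the shift is a single vector addition per steered layer, the inference-time overhead is negligible, which is consistent with the efficiency claim in the abstract.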

Repos / Data Links

Page Count
15 pages

Category
Computer Science:
Computation and Language