Exploiting Contextual Knowledge in LLMs through V-usable Information based Layer Enhancement

Published: April 22, 2025 | arXiv ID: 2504.15630v1

By: Xiaowei Yuan, Zhao Yang, Ziyang Huang, and more

Potential Business Impact:

Helps language models use the context they are given more faithfully, which can improve answer accuracy in question-answering and search applications.

Business Areas:
Semantic Search, Internet Services

Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks, yet they often struggle to produce context-faithful generations that properly reflect the knowledge supplied in their input. Existing approaches focus on enhancing decoding strategies, but they ignore the underlying mechanism by which contextual information is processed within LLMs' internal states. As a result, LLMs remain limited in their ability to fully leverage contextual knowledge. In this paper, we propose Context-aware Layer Enhancement (CaLE), a novel intervention method that enhances the utilization of contextual knowledge within LLMs' internal representations. Using V-usable information analysis, CaLE strategically amplifies the growth of contextual information at an optimal layer, thereby enriching the representations that reach the final layer. Our experiments demonstrate that CaLE effectively improves context-faithful generation in Question-Answering tasks, particularly in scenarios involving unknown or conflicting contextual knowledge.
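The abstract describes a two-step idea: measure how much contextual information each layer adds to the residual stream, then amplify the update at the layer where that growth peaks. The paper's actual method uses V-usable information over real transformer states; the sketch below is only a toy illustration of the intervention pattern, using a small random residual network and a simple alignment-with-a-context-direction proxy in place of the V-usable-information estimator. All sizes, the amplification factor `ALPHA`, and the gain proxy are assumptions for illustration, not the authors' implementation.

```python
import math
import random

random.seed(0)
D, N_LAYERS, ALPHA = 8, 4, 1.5  # toy dimensions; ALPHA > 1 amplifies

def randvec(n):
    return [random.gauss(0, 1) for _ in range(n)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# One random D x D weight matrix per "layer" of a toy residual stream.
weights = [[randvec(D) for _ in range(D)] for _ in range(N_LAYERS)]

# A fixed unit vector standing in for "contextual information" direction
# (an assumption; the paper instead estimates V-usable information).
ctx = randvec(D)
ctx_norm = math.sqrt(dot(ctx, ctx))
ctx = [x / ctx_norm for x in ctx]

def layer_update(W, h):
    # Toy per-layer transformation added to the residual stream.
    return [math.tanh(0.1 * dot(row, h)) for row in W]

def forward(h0, amplify_layer=None):
    """Run the residual stream; optionally scale one layer's update."""
    h, gains = list(h0), []
    for layer, W in enumerate(weights):
        u = layer_update(W, h)
        if layer == amplify_layer:
            u = [ALPHA * x for x in u]  # CaLE-style amplification
        # Proxy for contextual-information growth at this layer.
        gains.append(dot(u, ctx))
        h = [a + b for a, b in zip(h, u)]  # residual addition
    return h, gains

h0 = randvec(D)
_, gains = forward(h0)                       # pass 1: locate the peak-gain layer
best = max(range(N_LAYERS), key=lambda l: gains[l])
h_enhanced, _ = forward(h0, amplify_layer=best)  # pass 2: intervene there
h_plain, _ = forward(h0)
diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(h_enhanced, h_plain)))
print(best, diff > 0)
```

The point of the sketch is the control flow: one diagnostic pass to pick the intervention layer, then a second pass that scales only that layer's contribution, leaving the rest of the computation untouched.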

Page Count
16 pages

Category
Computer Science:
Computation and Language