Context Tuning for In-Context Optimization

Published: July 6, 2025 | arXiv ID: 2507.04221v1

By: Jack Lu, Ryan Teehan, Zhenbang Yang, and more

Potential Business Impact:

Lets language models adapt to new tasks from just a few examples, without costly fine-tuning.

Business Areas:
Semantic Search, Internet Services

We introduce Context Tuning, a simple and effective method to significantly enhance few-shot adaptation of large language models (LLMs) without fine-tuning model parameters. While prompt-based adaptation techniques have demonstrated the effectiveness of lightweight adaptation for LLMs, they typically initialize the trainable prompt or prefix with tokens that are irrelevant to the task at hand. In contrast, Context Tuning initializes the trainable prompt or prefix with task-specific demonstration examples, leveraging the model's inherent In-Context Learning (ICL) ability to extract relevant information for improved few-shot performance. Extensive evaluations on benchmarks such as CrossFit, UnifiedQA, MMLU, BIG-Bench Hard, and ARC show that Context Tuning outperforms traditional prompt-based adaptation methods and achieves accuracy competitive with Test-Time Training at significantly higher training efficiency.
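The core idea can be illustrated in a few lines: rather than initializing a soft prompt with random or task-irrelevant token embeddings, Context Tuning seeds it with the embeddings of task demonstrations and then optimizes only that prompt while the base model stays frozen. Below is a minimal sketch of this prompt-initialization idea, assuming a HuggingFace-style causal LM; the model choice, demonstration strings, and helper names are illustrative and not taken from the paper's code.

```python
# Sketch of Context Tuning (soft-prompt variant): the trainable prompt is
# initialized from embeddings of task demonstrations, then trained with the
# base model frozen. Assumes a HuggingFace-style causal LM; all names here
# are illustrative, not from the paper's implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the paper evaluates larger LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.requires_grad_(False)  # base parameters stay frozen

# Task-specific demonstrations seed the prompt (vs. irrelevant tokens).
demos = "Q: 2+2? A: 4\nQ: 3+5? A: 8\n"
demo_ids = tok(demos, return_tensors="pt").input_ids
with torch.no_grad():
    init = model.get_input_embeddings()(demo_ids)  # shape (1, P, d)
soft_prompt = torch.nn.Parameter(init.clone())

opt = torch.optim.AdamW([soft_prompt], lr=1e-3)

def step(query: str, answer: str) -> float:
    """One few-shot training step: prepend the soft prompt to the query
    embeddings and compute the LM loss on the answer tokens only."""
    q_ids = tok(query, return_tensors="pt").input_ids
    a_ids = tok(answer, return_tensors="pt").input_ids
    ids = torch.cat([q_ids, a_ids], dim=1)
    embeds = model.get_input_embeddings()(ids)
    inputs = torch.cat([soft_prompt, embeds], dim=1)
    # Labels: -100 masks the soft-prompt and query positions, so only the
    # answer tokens are supervised.
    P = soft_prompt.shape[1]
    labels = torch.cat(
        [torch.full((1, P + q_ids.shape[1]), -100), a_ids], dim=1
    )
    loss = model(inputs_embeds=inputs, labels=labels).loss
    opt.zero_grad()
    loss.backward()  # gradients flow only into soft_prompt
    opt.step()
    return loss.item()
```

The sketch only shows how demonstration-based initialization differs from the random or vocabulary-sampled initialization of standard prompt tuning; the paper's prefix variant and benchmark setup follow the same principle.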

Country of Origin
🇺🇸 United States

Page Count
22 pages

Category
Computer Science:
Computation and Language