Score: 2

CG-TTRL: Context-Guided Test-Time Reinforcement Learning for On-Device Large Language Models

Published: November 9, 2025 | arXiv ID: 2511.06430v1

By: Peyman Hosseini, Ondrej Bohdal, Taha Ceritli and more

BigTech Affiliations: Samsung

Potential Business Impact:

Lets on-device language models adapt to new tasks at test time without labeled data, improving answer accuracy on math and science QA after only a few training steps.

Business Areas:
Semantic Search, Internet Services

Test-time Reinforcement Learning (TTRL) has shown promise in adapting foundation models to complex tasks at test time, yielding large performance improvements. TTRL leverages an elegant two-phase sampling strategy: first, multi-sampling derives a pseudo-label via majority voting; then, downsampling and reward-based fine-tuning encourage the model to explore and learn diverse valid solutions, with the pseudo-label modulating the reward signal. Meanwhile, in-context learning has been widely explored at inference time and has demonstrated the ability to enhance model performance without weight updates. However, TTRL's two-phase sampling strategy under-utilizes contextual guidance, which could improve pseudo-label accuracy in the initial exploitation phase while regulating exploration in the second. To address this, we propose context-guided TTRL (CG-TTRL), which integrates context dynamically into both sampling phases, and we propose a method for efficient context selection for on-device applications. Our evaluations on mathematical and scientific QA benchmarks show that CG-TTRL outperforms TTRL (e.g. an additional 7% relative accuracy improvement over TTRL), while boosting efficiency by reaching strong performance after only a few steps of test-time training (e.g. an 8% relative improvement rather than 1% over TTRL after 3 steps).
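To make the two-phase sampling idea concrete, here is a minimal Python sketch of the TTRL loop structure described in the abstract, with an optional context string standing in for CG-TTRL's contextual guidance. This is not the authors' code; the `generate` callable, sample counts, and exact-match voting are assumptions for illustration only.

```python
# Hedged sketch of TTRL-style two-phase sampling (assumed interfaces, not the paper's code).
from collections import Counter

def pseudo_label(question, generate, n_samples=16, context=""):
    """Phase 1: multi-sample answers and derive a pseudo-label by majority vote.
    In a CG-TTRL-like setup, retrieved context could be prepended to the prompt."""
    prompt = (context + "\n" + question) if context else question
    answers = generate(prompt, n_samples)          # assumed: returns a list of answer strings
    label, _ = Counter(answers).most_common(1)[0]  # majority-voted pseudo-label
    return label

def rollout_rewards(question, generate, label, n_rollouts=4, context=""):
    """Phase 2: downsample a few rollouts and reward agreement with the pseudo-label.
    These (rollout, reward) pairs would then drive a policy-gradient update."""
    prompt = (context + "\n" + question) if context else question
    rollouts = generate(prompt, n_rollouts)
    return [(r, 1.0 if r == label else 0.0) for r in rollouts]
```

Usage would pair these two calls per test question, repeating for a few test-time training steps; the paper's contribution lies in how the context is selected and injected into both phases, which this sketch does not attempt to reproduce.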

Country of Origin
🇰🇷 South Korea, 🇬🇧 United Kingdom

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)