Score: 2

Towards Theoretical Understanding of Transformer Test-Time Computing: Investigation on In-Context Linear Regression

Published: August 11, 2025 | arXiv ID: 2508.07571v2

By: Xingwu Chen, Miao Lu, Beining Wu, and more

BigTech Affiliations: Stanford University

Potential Business Impact:

Could improve language model answers by spending extra compute at inference time to generate and select among multiple candidate responses.

Using more test-time computation during language model inference, such as generating more intermediate thoughts or sampling multiple candidate answers, has proven effective in significantly improving model performance. This paper takes an initial step toward bridging the gap between practical language model inference and theoretical transformer analysis by incorporating randomness and sampling. We focus on in-context linear regression with continuous/binary coefficients, where our framework simulates language model decoding through noise injection and binary coefficient sampling. Through this framework, we provide detailed analyses of widely adopted inference techniques. Supported by empirical results, our theoretical framework and analysis demonstrate the potential to offer new insights into inference behaviors in real-world language models.
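To make the setup concrete, below is a minimal, hypothetical sketch (not the paper's actual framework or analysis): an in-context linear regression task where stochastic decoding is mimicked by injecting noise into a least-squares fit, and extra test-time compute is spent sampling several candidate answers and keeping the best one. All names (`noisy_predict`, `best_of_n`, the oracle selection rule) are assumptions for illustration only.

```python
# Illustrative sketch only: noise-injected in-context linear regression
# with best-of-N candidate sampling, loosely mirroring the paper's themes.
import numpy as np

rng = np.random.default_rng(0)

# In-context examples (x_i, y_i) drawn from a hidden linear rule with
# binary coefficients, as in one of the settings the abstract mentions.
d, n_context = 5, 20
w_true = rng.choice([-1.0, 1.0], size=d)
X = rng.normal(size=(n_context, d))
y = X @ w_true

def noisy_predict(X, y, x_query, temperature=0.1):
    """Least-squares 'in-context solver' with injected noise, standing in
    for a transformer's stochastic decoding at a given temperature."""
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    w_sample = w_hat + temperature * rng.normal(size=w_hat.shape)
    return x_query @ w_sample

def best_of_n(X, y, x_query, y_query, n_samples=16):
    """Sample several candidate answers and keep the one closest to the
    target (an oracle verifier, purely for illustration)."""
    candidates = [noisy_predict(X, y, x_query) for _ in range(n_samples)]
    return min(candidates, key=lambda c: abs(c - y_query))

x_q = rng.normal(size=d)
y_q = x_q @ w_true
print("single-sample error:", abs(noisy_predict(X, y, x_q) - y_q))
print("best-of-16 error:   ", abs(best_of_n(X, y, x_q, y_q) - y_q))
```

The point of the toy example is only that more sampling (here, best-of-16 versus a single noisy prediction) tends to reduce error, which is the kind of test-time-compute effect the paper studies theoretically.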

Country of Origin
🇺🇸 🇭🇰 Hong Kong, United States

Page Count
41 pages

Category
Computer Science:
Machine Learning (CS)