Towards Theoretical Understanding of Transformer Test-Time Computing: Investigation on In-Context Linear Regression
By: Xingwu Chen, Miao Lu, Beining Wu, and more
Potential Business Impact:
Makes AI writing smarter by trying many ideas and picking the best one.
Using more test-time computation during language model inference, such as generating more intermediate thoughts or sampling multiple candidate answers, has proven effective in significantly improving model performance. This paper takes an initial step toward bridging the gap between practical language model inference and theoretical transformer analysis by incorporating randomness and sampling. We focus on in-context linear regression with continuous/binary coefficients, where our framework simulates language model decoding through noise injection and binary coefficient sampling. Through this framework, we provide detailed analyses of widely adopted inference techniques. Supported by empirical results, our theoretical framework and analysis show the potential to offer new insights into inference behavior in real-world language models.
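To make the setup concrete, here is a minimal sketch (not the authors' code) of the kind of framework the abstract describes: an in-context linear regression task with binary coefficients, where stochastic decoding is simulated by injecting noise into a base estimate and sampling each coefficient, and extra test-time computation is spent on majority voting over several sampled answers. All function names, the sigmoid decoder, and the noise/temperature values are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n_context=32, d=8):
    """In-context linear regression with binary coefficients w in {0,1}^d."""
    w_true = rng.integers(0, 2, size=d).astype(float)
    X = rng.normal(size=(n_context, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_context)  # noisy in-context observations
    return X, y, w_true

def sample_candidate(X, y, temperature=0.5):
    """One stochastic 'decode': noisy base estimate, then per-coordinate sampling."""
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)      # base least-squares estimate
    logits = (w_hat - 0.5) / temperature               # noise injection enters via
    probs = 1.0 / (1.0 + np.exp(-logits))              # a sigmoid sampling "decoder"
    return (rng.random(len(probs)) < probs).astype(float)

def majority_vote(candidates):
    """Aggregate sampled answers coordinate-wise (self-consistency style)."""
    return (np.mean(candidates, axis=0) >= 0.5).astype(float)

X, y, w_true = make_task()
candidates = [sample_candidate(X, y) for _ in range(16)]  # spend more test-time compute
print("single-sample accuracy:", np.mean(candidates[0] == w_true))
print("majority-vote accuracy:", np.mean(majority_vote(candidates) == w_true))
```

In this toy setting, voting over more sampled candidates typically recovers the binary coefficients more reliably than a single stochastic decode, mirroring the "sampling multiple candidate answers" technique the abstract analyzes.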
Similar Papers
Understanding the Role of Training Data in Test-Time Scaling
Artificial Intelligence
Helps AI solve harder problems by thinking more.
Why Do Transformers Fail to Forecast Time Series In-Context?
Machine Learning (CS)
Makes computers predict future events more accurately.