Interpreting the Latent Structure of Operator Precedence in Language Models
By: Dharunish Yugeswardeenoo, Harshil Nukala, Cole Blondin, and more
Potential Business Impact:
Teaches computers math rules for better answers.
Large Language Models (LLMs) have demonstrated impressive reasoning capabilities but continue to struggle with arithmetic tasks. Prior work largely focuses on outputs or prompting strategies, leaving open the question of how models internally structure arithmetic computation. In this work, we investigate whether LLMs encode operator precedence in their internal representations, using the open-source instruction-tuned LLaMA 3.2-3B model. We construct a dataset of arithmetic expressions with three operands and two operators, varying the order of operators and the placement of parentheses. Using this dataset, we trace whether intermediate results appear in the model's residual stream, applying interpretability techniques such as the logit lens, linear classification probes, and UMAP geometric visualization. Our results show that intermediate computations are present in the residual stream, particularly after MLP blocks. We also find that the model linearly encodes precedence in each operator's embeddings after the attention layers. Finally, we introduce partial embedding swap, a technique that modifies operator precedence by exchanging high-impact embedding dimensions between operators.
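To make the residual-stream tracing concrete, here is a minimal logit-lens sketch in the spirit of the abstract: project each layer's hidden state at the final token through the model's final norm and unembedding matrix, and check where the intermediate result of the parenthesized sub-expression ranks. The checkpoint name, prompt, and target token are illustrative assumptions, not the paper's exact setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint; the paper uses instruction-tuned LLaMA 3.2-3B.
MODEL = "meta-llama/Llama-3.2-3B-Instruct"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

# Hypothetical probe expression: the parenthesized sub-result is 7.
prompt = "(3 + 4) * 5 ="
inputs = tok(prompt, return_tensors="pt")
target = tok.encode(" 7", add_special_tokens=False)[0]  # intermediate value

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Logit lens: pass each layer's residual stream at the last position
# through the final RMSNorm and the unembedding head, then find the
# rank of the intermediate-result token in the induced ordering.
for layer, h in enumerate(out.hidden_states):
    h_last = model.model.norm(h[:, -1, :])   # (1, hidden), final norm
    logits = model.lm_head(h_last)[0]        # (vocab,)
    rank = (logits.argsort(descending=True) == target).nonzero().item()
    print(f"layer {layer:2d}: top token {tok.decode(logits.argmax())!r}, "
          f"rank of ' 7' = {rank}")
```

If the intermediate token's rank drops sharply at mid-to-late layers, particularly right after MLP blocks, that would be consistent with the finding that intermediate computations surface in the residual stream.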
Similar Papers
Modular Arithmetic: Language Models Solve Math Digit by Digit
Computation and Language
Helps computers do math like humans.
Interpretability Framework for LLMs in Undergraduate Calculus
Computers and Society
Checks math answers by understanding how they're solved.
Implicit Reasoning in Large Language Models: A Comprehensive Survey
Computation and Language
Lets computers think faster without showing steps.