Constructive Circuit Amplification: Improving Math Reasoning in LLMs via Targeted Sub-Network Updates
By: Nikhil Prakash, Donghao Ren, Dominik Moritz, and more
Potential Business Impact:
Makes AI better at math without changing it much.
Prior studies investigating the internal workings of LLMs have uncovered sparse subnetworks, often referred to as circuits, that are responsible for performing specific tasks. It has also been shown that performance gains from fine-tuning often result from the strengthening of circuits that already exist in the model. Taken together, these findings suggest the possibility of intervening directly on such circuits to make precise, task-targeted updates. Motivated by this, we propose a novel method called Constructive Circuit Amplification, which identifies pivotal tokens from model reasoning traces along with the model components responsible for the desired task, and updates only those components. Applied to mathematical reasoning, it improves accuracy by up to +11.4% across multiple models while modifying as little as 1.59% of model components, with minimal impact on other abilities as measured by MMLU, TriviaQA, and TruthfulQA. These results demonstrate that targeted capabilities can be reliably enhanced by selectively updating a sparse set of model components.
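The core idea of updating only circuit components, rather than the full model, can be sketched in a few lines. The toy model, its component names, and the hand-picked "circuit" below are illustrative assumptions, not the paper's actual identification procedure:

```python
# Minimal sketch of a targeted sub-network update, assuming a toy model
# represented as a dict of named "components" (real circuits span attention
# heads and MLP blocks identified from reasoning traces).

def targeted_update(weights, grads, target_components, lr=0.1):
    """Apply a gradient step only to components identified as part of
    the task circuit; all other components are left untouched."""
    updated = dict(weights)
    for name in target_components:
        updated[name] = weights[name] - lr * grads[name]
    return updated

# Hypothetical example: only 1 of 4 components belongs to the "math circuit".
weights = {"attn_head_0": 1.0, "attn_head_1": 2.0, "mlp_0": 3.0, "mlp_1": 4.0}
grads = {"attn_head_0": 0.5, "attn_head_1": 0.5, "mlp_0": 0.5, "mlp_1": 0.5}
new_weights = targeted_update(weights, grads, target_components={"mlp_0"})

print(new_weights["mlp_0"])       # only this component moves: 2.95
print(new_weights["attn_head_0"]) # unchanged: 1.0
```

Because gradients flow only into the selected components, the rest of the model's behavior (and hence unrelated abilities) is preserved by construction.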
Similar Papers
LLMs for Analog Circuit Design Continuum (ACDC)
Machine Learning (CS)
Helps computers design circuits, but they make mistakes.
CircuitSeer: Mining High-Quality Data by Probing Mathematical Reasoning Circuits in LLMs
Artificial Intelligence
Finds smart ways to teach computers faster.
AbstRaL: Augmenting LLMs' Reasoning by Reinforcing Abstract Thinking
Computation and Language
Teaches computers to think smarter, not just memorize.