From Indirect Object Identification to Syllogisms: Exploring Binary Mechanisms in Transformer Circuits
By: Karim Saraipour, Shichang Zhang
Potential Business Impact:
Shows how computers understand and reason with logic.
Transformer-based language models (LMs) can perform a wide range of tasks, and mechanistic interpretability (MI) aims to reverse-engineer the components responsible for task completion in order to understand model behavior. Previous MI research has focused on linguistic tasks such as Indirect Object Identification (IOI). In this paper, we investigate the ability of GPT-2 small to handle binary truth values by analyzing its behavior on syllogistic prompts, e.g., "Statement A is true. Statement B matches statement A. Statement B is", a task that requires more complex logical reasoning than IOI. Through our analysis of several syllogism tasks of varying difficulty, we identify multiple circuits that mechanistically explain GPT-2's logical-reasoning capabilities, and we uncover binary mechanisms that facilitate task completion, including the ability, via negative heads, to produce a negated token that does not appear in the input prompt. Our evaluation using a faithfulness metric shows that a circuit comprising five attention heads achieves over 90% of the original model's performance. By relating our findings to IOI analysis, we provide new insights into the roles of specific attention heads and MLPs in LMs. These insights contribute to a broader understanding of model reasoning and support future research in mechanistic interpretability.
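To make the prompt setup concrete, below is a minimal sketch (not the authors' code) of how one might probe GPT-2 small on the example syllogistic prompt, assuming the Hugging Face transformers library. Comparing the next-token logits for " true" versus " false" is an illustrative assumption for measuring the model's answer preference, not the paper's faithfulness metric.

    # Minimal sketch: probe GPT-2 small on a syllogistic prompt (illustrative, not the authors' code)
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")   # GPT-2 small
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    # Prompt taken from the abstract's example
    prompt = "Statement A is true. Statement B matches statement A. Statement B is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token

    # Assumed evaluation: compare the two candidate completions " true" and " false"
    true_id = tokenizer.encode(" true")[0]
    false_id = tokenizer.encode(" false")[0]
    print("logit(' true') - logit(' false') =", (logits[true_id] - logits[false_id]).item())

A positive difference indicates the model favors the correct completion " true" for this prompt; circuit analyses like the paper's typically compare such logit differences between the full model and an ablated subset of attention heads.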
Similar Papers
Emergence of Minimal Circuits for Indirect Object Identification in Attention-Only Transformers
Computation and Language
Finds simple "thinking paths" inside AI.
Beyond Components: Singular Vector-Based Interpretability of Transformer Circuits
Machine Learning (CS)
Finds hidden, separate jobs inside an AI's brain.
Unsupervised decoding of encoded reasoning using language model interpretability
Artificial Intelligence
Uncovers how AI thinks, even when hidden.