From Indirect Object Identification to Syllogisms: Exploring Binary Mechanisms in Transformer Circuits

Published: August 22, 2025 | arXiv ID: 2508.16109v1

By: Karim Saraipour, Shichang Zhang

Potential Business Impact:

Shows how language models internally carry out logical reasoning, which can inform more transparent and trustworthy AI systems.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Transformer-based language models (LMs) can perform a wide range of tasks, and mechanistic interpretability (MI) aims to reverse-engineer the components responsible for task completion to understand their behavior. Previous MI research has focused on linguistic tasks such as Indirect Object Identification (IOI). In this paper, we investigate the ability of GPT-2 small to handle binary truth values by analyzing its behavior on syllogistic prompts, e.g., "Statement A is true. Statement B matches statement A. Statement B is", which require more complex logical reasoning than IOI. Through our analysis of several syllogism tasks of varying difficulty, we identify multiple circuits that mechanistically explain GPT-2's logical-reasoning capabilities, and we uncover binary mechanisms that facilitate task completion, including the ability of negative attention heads to produce a negated token not present in the input prompt. Our evaluation using a faithfulness metric shows that a circuit comprising five attention heads achieves over 90% of the original model's performance. By relating our findings to the IOI analysis, we provide new insights into the roles of specific attention heads and MLPs in LMs. These insights contribute to a broader understanding of model reasoning and support future research in mechanistic interpretability.
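For readers who want a feel for the task setup, the sketch below probes GPT-2 small with the paper's example prompt and reads a binary preference off the next-token logits. This is a minimal illustration under stated assumptions, not the authors' code: it uses the Hugging Face transformers library, and scoring the task by comparing the logits of the single tokens " true" and " false" is one plausible way to measure the behavior the abstract describes.

    # Minimal sketch: probe GPT-2 small with the paper's example syllogistic
    # prompt and compare the logits of the two truth-value completions.
    # Assumption: " true" vs. " false" logit difference is used as the score.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "Statement A is true. Statement B matches statement A. Statement B is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token

    # " true" and " false" each tokenize to a single GPT-2 token.
    true_id = tokenizer.encode(" true")[0]
    false_id = tokenizer.encode(" false")[0]
    logit_diff = (logits[true_id] - logits[false_id]).item()
    print(f"logit(' true') - logit(' false') = {logit_diff:.3f}")

A logit difference between candidate completions is the same style of metric common in IOI-circuit analyses; a positive value here would indicate the model prefers the correct truth value for this prompt.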

Country of Origin
🇺🇸 United States

Page Count
17

Category
Computer Science:
Computation and Language