Mechanistic Interpretability of GPT-2: Lexical and Contextual Layers in Sentiment Analysis
By: Amartya Hatua
Potential Business Impact:
Shows how computers understand feelings in words.
We present a mechanistic interpretability study of GPT-2 that causally examines how sentiment information is processed across its transformer layers. Using systematic activation patching across all 12 layers, we test the hypothesized two-stage sentiment architecture comprising early lexical detection and mid-layer contextual integration. Our experiments confirm that early layers (0-3) act as lexical sentiment detectors, encoding stable, position-specific polarity signals that are largely independent of context. However, all three contextual-integration hypotheses (Middle-Layer Concentration, Phenomenon Specificity, and Distributed Processing) are falsified. Instead of mid-layer specialization, we find that contextual phenomena such as negation, sarcasm, and domain shifts are integrated primarily in late layers (8-11) through a unified, non-modular mechanism. These experimental findings provide causal evidence that GPT-2's sentiment computation differs from the predicted hierarchical pattern, highlighting the need for further empirical characterization of contextual integration in large language models.
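To make the method concrete, here is a minimal activation-patching sketch in the spirit of the layer sweep described above, written with the TransformerLens library. The prompts, the resid_post patch site, and the positive-minus-negative logit-difference metric are illustrative assumptions, not the paper's exact experimental setup.

import torch
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")  # 12-layer GPT-2 small

# Matched prompts differing only in the sentiment word (an assumed example).
clean_prompt = "The movie was absolutely wonderful. Overall the review is"
corrupt_prompt = "The movie was absolutely terrible. Overall the review is"
clean_tokens = model.to_tokens(clean_prompt)
corrupt_tokens = model.to_tokens(corrupt_prompt)

pos_id = model.to_single_token(" positive")
neg_id = model.to_single_token(" negative")

def logit_diff(logits):
    # Positive-minus-negative logit at the final position.
    return (logits[0, -1, pos_id] - logits[0, -1, neg_id]).item()

with torch.no_grad():
    clean_logits, clean_cache = model.run_with_cache(clean_tokens)
    corrupt_logits = model(corrupt_tokens)
print(f"clean: {logit_diff(clean_logits):+.3f}  corrupt: {logit_diff(corrupt_logits):+.3f}")

# Position where the two prompts differ, i.e. the sentiment word.
diff_pos = (clean_tokens != corrupt_tokens).nonzero()[0, 1].item()

# Patch the clean residual stream into the corrupted run at that position,
# sweeping one layer at a time across all 12 layers.
for layer in range(model.cfg.n_layers):
    hook_name = utils.get_act_name("resid_post", layer)

    def patch_hook(resid, hook, clean=clean_cache[hook_name]):
        resid[:, diff_pos, :] = clean[:, diff_pos, :]
        return resid

    with torch.no_grad():
        patched_logits = model.run_with_hooks(
            corrupt_tokens, fwd_hooks=[(hook_name, patch_hook)]
        )
    print(f"layer {layer:2d}: patched logit diff = {logit_diff(patched_logits):+.3f}")

Under the findings reported above, one would expect patching at the sentiment word's position in early layers (0-3) to move the logit difference strongly toward the clean run, reflecting lexical polarity signals, while contextual effects would only be recoverable by intervening on late layers (8-11).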
Similar Papers
Three-Stage Narrative Analysis: Plot-Sentiment Breakdown, Structure Learning and Concept Detection
Computation and Language
Helps pick movies by understanding story feelings.
TermGPT: Multi-Level Contrastive Fine-Tuning for Terminology Adaptation in Legal and Financial Domain
Computation and Language
Teaches computers to understand special words better.
The Promise and Peril of Generative AI: Evidence from GPT as Sell-Side Analysts
General Finance
Helps people know when to trust AI money predictions.