Model Internal Sleuthing: Finding Lexical Identity and Inflectional Morphology in Modern Language Models
By: Michael Li, Nishant Subramani
Potential Business Impact:
Models store word meanings early, grammar later.
Large transformer-based language models dominate modern NLP, yet our understanding of how they encode linguistic information is rooted in studies of early models like BERT and GPT-2. To better understand today's language models, we investigate how 25 models, from classical architectures (BERT, DeBERTa, GPT-2) to modern large language models (Pythia, OLMo-2, Gemma-2, Qwen2.5, Llama-3.1), represent lexical identity and inflectional morphology across six typologically diverse languages. Using linear and nonlinear classifiers trained on hidden activations, we predict word lemmas and inflectional features layer by layer. We find that models concentrate lexical information linearly in early layers and increasingly nonlinearly in later layers, while keeping inflectional information uniformly accessible and linearly separable throughout. Additional experiments probe the nature of these encodings: attention and residual analyses examine where within layers information can be recovered, steering vector experiments test what information can be functionally manipulated, and intrinsic dimensionality analyses explore how the representational structure evolves across layers. Remarkably, these encoding patterns emerge across all models we test, despite differences in architecture, size, and training regime (pretrained and instruction-tuned variants). This suggests that, even with substantial advances in LLM technologies, transformer models organize linguistic information in similar ways, indicating that these properties are important for next-token prediction and are learned early during pretraining. Our code is available at https://github.com/ml5885/model_internal_sleuthing
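The layer-wise probing setup described in the abstract can be sketched in a few lines: extract hidden activations at every layer and fit a separate linear classifier per layer. The sketch below is a minimal illustration of that idea, not the authors' exact pipeline (see the linked repository for that); the model name, the toy singular/plural labels, and the logistic-regression probe are all illustrative assumptions.

```python
# Minimal sketch of layer-wise linear probing on hidden activations.
# Assumptions (not from the paper): GPT-2 as the model, a toy
# singular/plural task as the inflectional feature, and scikit-learn
# logistic regression as the linear probe.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

MODEL = "gpt2"  # any HF model that returns hidden states works here
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, output_hidden_states=True).eval()

# Toy data: words paired with an inflectional label (0 = singular, 1 = plural).
words = ["cat", "cats", "dog", "dogs", "house", "houses", "car", "cars"]
labels = [0, 1, 0, 1, 0, 1, 0, 1]

# Collect one activation vector per word per layer (last subword token).
per_layer = None
with torch.no_grad():
    for w in words:
        ids = tokenizer(w, return_tensors="pt")
        hidden = model(**ids).hidden_states  # tuple: (embeddings, layer 1, ...)
        vecs = [h[0, -1].numpy() for h in hidden]
        if per_layer is None:
            per_layer = [[] for _ in vecs]
        for layer, v in enumerate(vecs):
            per_layer[layer].append(v)

# Fit a linear probe per layer and report held-out accuracy.
# High accuracy at a layer means the feature is linearly decodable there.
for layer, X in enumerate(per_layer):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.5, random_state=0, stratify=labels
    )
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"layer {layer:2d}: probe accuracy = {probe.score(X_te, y_te):.2f}")
```

Comparing per-layer probe accuracies (and swapping the linear probe for an MLP) is what lets the paper distinguish linearly accessible information from information that is only nonlinearly recoverable.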
Similar Papers
On Entity Identification in Language Models
Computation and Language
Helps computers understand who or what is being talked about.
Probing the Vulnerability of Large Language Models to Polysemantic Interventions
Artificial Intelligence
Makes AI models easier to trick or control.
Probing Subphonemes in Morphology Models
Computation and Language
Helps computers learn language rules better.