Score: 1

How LLMs Learn: Tracing Internal Representations with Sparse Autoencoders

Published: March 9, 2025 | arXiv ID: 2503.06394v1

By: Tatsuro Inaba, Kentaro Inui, Yusuke Miyao, and more

Potential Business Impact:

Clarifies how LLMs acquire multilingual and abstract knowledge over the course of training, which can inform interpretability tooling and more targeted model training.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) demonstrate remarkable multilingual capabilities and broad knowledge. However, the internal mechanisms underlying the development of these capabilities remain poorly understood. To investigate this, we analyze how the information encoded in LLMs' internal representations evolves during the training process. Specifically, we train sparse autoencoders at multiple checkpoints of the model and systematically compare the interpretative results across these stages. Our findings suggest that LLMs initially acquire language-specific knowledge independently, followed by cross-linguistic correspondences. Moreover, we observe that after mastering token-level knowledge, the model transitions to learning higher-level, abstract concepts, indicating the development of more conceptual understanding.

Repos / Data Links

Page Count
7 pages

Category
Computer Science: Computation and Language