Score: 5

Neural ODE Transformers: Analyzing Internal Dynamics and Adaptive Fine-tuning

Published: March 3, 2025 | arXiv ID: 2503.01329v2

By: Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, and more

BigTech Affiliations: Qualcomm, Johns Hopkins University, Stanford University

Potential Business Impact:

Improves interpretability of large language models and enables fine-tuning that adapts to different architectural constraints.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent advancements in large language models (LLMs) based on transformer architectures have sparked significant interest in understanding their inner workings. In this paper, we introduce a novel approach to modeling transformer architectures using highly flexible non-autonomous neural ordinary differential equations (ODEs). Our proposed model parameterizes all weights of attention and feed-forward blocks through neural networks, expressing these weights as functions of a continuous layer index. Through spectral analysis of the model's dynamics, we uncover an increase in eigenvalue magnitude that challenges the weight-sharing assumption prevalent in existing theoretical studies. We also leverage the Lyapunov exponent to examine token-level sensitivity, enhancing model interpretability. Our neural ODE transformer demonstrates performance comparable to or better than vanilla transformers across various configurations and datasets, while offering flexible fine-tuning capabilities that can adapt to different architectural constraints.
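To make the core idea concrete, the sketch below illustrates (not the authors' implementation) what it means to express attention and feed-forward weights as functions of a continuous layer index: a small hypernetwork maps the index t to the weight matrices, and the block acts as the vector field of a non-autonomous ODE integrated over depth. All module names, sizes, the sinusoidal embedding of t, and the explicit Euler integration scheme are illustrative assumptions.

```python
# Minimal sketch of a depth-continuous transformer block (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TimeConditionedLinear(nn.Module):
    """Linear layer whose weight matrix and bias are functions of the layer index t."""

    def __init__(self, d_in, d_out, t_dim=32):
        super().__init__()
        self.d_in, self.d_out, self.t_dim = d_in, d_out, t_dim
        # Hypernetwork: embedding of t -> flattened weight matrix and bias.
        self.hyper = nn.Sequential(
            nn.Linear(t_dim, 128), nn.SiLU(),
            nn.Linear(128, d_in * d_out + d_out),
        )

    def time_embed(self, t):
        # Sinusoidal embedding of the scalar layer index t in [0, 1].
        freqs = torch.exp(torch.linspace(0, 4, self.t_dim // 2, device=t.device))
        ang = t * freqs
        return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)

    def forward(self, x, t):
        params = self.hyper(self.time_embed(t))
        W = params[: self.d_in * self.d_out].view(self.d_out, self.d_in)
        b = params[self.d_in * self.d_out:]
        return F.linear(x, W, b)


class ODETransformerBlock(nn.Module):
    """One Euler step of dx/dt = attention(x, t) + mlp(x, t), with t-dependent weights."""

    def __init__(self, d_model, n_heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.qkv = TimeConditionedLinear(d_model, 3 * d_model)
        self.proj = TimeConditionedLinear(d_model, d_model)
        self.mlp_in = TimeConditionedLinear(d_model, 4 * d_model)
        self.mlp_out = TimeConditionedLinear(4 * d_model, d_model)
        self.n_heads = n_heads

    def vector_field(self, x, t):
        # Attention term with projections generated from t.
        h = self.norm1(x)
        q, k, v = self.qkv(h, t).chunk(3, dim=-1)
        B, L, D = q.shape
        shape = (B, L, self.n_heads, D // self.n_heads)
        q, k, v = (z.view(shape).transpose(1, 2) for z in (q, k, v))
        attn = F.scaled_dot_product_attention(q, k, v)
        attn = attn.transpose(1, 2).reshape(B, L, D)
        out = self.proj(attn, t)
        # Feed-forward term, also with t-dependent weights.
        h2 = self.norm2(x)
        return out + self.mlp_out(F.gelu(self.mlp_in(h2, t)), t)

    def forward(self, x, t, dt):
        # Explicit Euler step: weight sharing across layers is replaced by
        # weights that vary continuously with depth t.
        return x + dt * self.vector_field(x, t)


if __name__ == "__main__":
    block = ODETransformerBlock(d_model=64)
    x = torch.randn(2, 16, 64)
    n_steps = 8  # integrate over depth t in [0, 1]
    for i in range(n_steps):
        t = torch.tensor([i / n_steps])
        x = block(x, t, dt=1.0 / n_steps)
    print(x.shape)  # torch.Size([2, 16, 64])
```

Because a single block is reused with different values of t, the discretization (number of Euler steps) can be changed at fine-tuning time, which is one way to read the paper's claim of adapting to different architectural constraints.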

Country of Origin
🇰🇷 🇺🇸 Korea, Republic of; United States

Repos / Data Links

Page Count
34 pages

Category
Computer Science:
Machine Learning (CS)