Neural ODE Transformers: Analyzing Internal Dynamics and Adaptive Fine-tuning
By: Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, and more
Potential Business Impact:
Helps people see how AI language models work on the inside and adapt them more easily.
Recent advancements in large language models (LLMs) based on transformer architectures have sparked significant interest in understanding their inner workings. In this paper, we introduce a novel approach to modeling transformer architectures using highly flexible non-autonomous neural ordinary differential equations (ODEs). Our proposed model parameterizes all weights of attention and feed-forward blocks through neural networks, expressing these weights as functions of a continuous layer index. Through spectral analysis of the model's dynamics, we uncover an increase in eigenvalue magnitude that challenges the weight-sharing assumption prevalent in existing theoretical studies. We also leverage the Lyapunov exponent to examine token-level sensitivity, enhancing model interpretability. Our neural ODE transformer demonstrates performance comparable to or better than vanilla transformers across various configurations and datasets, while offering flexible fine-tuning capabilities that can adapt to different architectural constraints.
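The core idea described in the abstract, generating attention and feed-forward weights as functions of a continuous layer index and then integrating the resulting ODE through "depth", can be illustrated with a small sketch. The code below is a minimal, assumption-laden sketch in PyTorch, not the authors' implementation: the module names (WeightGenerator, ODETransformerBlock, integrate), the hypernetwork shape, and the fixed-step Euler solver are all illustrative choices standing in for whatever the paper actually uses.

```python
# Minimal sketch (not the authors' code) of a non-autonomous neural ODE
# transformer block: attention weights are produced by a small hypernetwork
# as functions of a continuous layer index t, and hidden states are advanced
# with a fixed-step Euler integrator. Names and sizes are illustrative.
import torch
import torch.nn as nn


class WeightGenerator(nn.Module):
    """Maps a scalar layer index t in [0, 1] to a flat weight vector."""
    def __init__(self, out_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, out_features)
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        return self.net(t.view(1, 1)).squeeze(0)


class ODETransformerBlock(nn.Module):
    """dx/dt = f(x, t): self-attention whose projections depend on t."""
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.d_model, self.n_heads = d_model, n_heads
        # Generators for the packed QKV projection and the output projection.
        self.qkv_gen = WeightGenerator(3 * d_model * d_model)
        self.out_gen = WeightGenerator(d_model * d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, t: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        b, s, d = x.shape
        w_qkv = self.qkv_gen(t).view(3 * d, d)   # t-dependent weights
        w_out = self.out_gen(t).view(d, d)
        qkv = self.norm(x) @ w_qkv.t()
        q, k, v = qkv.chunk(3, dim=-1)

        def split(z):  # (batch, seq, d) -> (batch, heads, seq, head_dim)
            return z.view(b, s, self.n_heads, d // self.n_heads).transpose(1, 2)

        scale = (d // self.n_heads) ** 0.5
        attn = torch.softmax(split(q) @ split(k).transpose(-1, -2) / scale, dim=-1)
        out = (attn @ split(v)).transpose(1, 2).reshape(b, s, d)
        return out @ w_out.t()   # the vector field dx/dt at "depth" t


def integrate(block: ODETransformerBlock, x: torch.Tensor, n_steps: int = 8) -> torch.Tensor:
    """Euler integration of the hidden states over t in [0, 1]."""
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = torch.tensor(i * dt)
        x = x + dt * block(t, x)   # one Euler step = one residual update
    return x


x = torch.randn(2, 10, 64)              # (batch, seq_len, d_model)
y = integrate(ODETransformerBlock(), x)
print(y.shape)                           # torch.Size([2, 10, 64])
```

A single Euler step, x + dt * f(x, t), has the same form as the residual update of an ordinary transformer layer, which is what motivates the ODE view; making f independent of t would correspond to the weight-sharing assumption that the paper's spectral analysis calls into question.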
Similar Papers
High-order expansion of Neural Ordinary Differential Equations flows
Optimization and Control
Explains how smart computer models make decisions.
A multilevel approach to accelerate the training of Transformers
Machine Learning (CS)
Makes machine learning models train much faster.
Learning the Simplest Neural ODE
Machine Learning (Stat)
Makes it easier to teach computers to model things that change over time.