Deep learning for pedestrians: backpropagation in Transformers
By: Laurent Boué
This document is a follow-up to our previous paper dedicated to a vectorized derivation of backpropagation in CNNs. Following the same principles and notations already put in place there, we now focus on transformer-based next-token-prediction architectures. To this end, we apply our lightweight, index-free methodology to new types of layers such as embedding, multi-headed self-attention and layer normalization. In addition, we also provide gradient expressions for LoRA layers to illustrate parameter-efficient fine-tuning. Why bother doing manual backpropagation when there are so many tools that do this automatically? Any gap in understanding of how values propagate forward becomes evident when attempting to differentiate the loss function, and by working through the backward pass manually we gain a deeper intuition for how each operation influences the final output. A complete PyTorch implementation of a minimalistic GPT-like network is also provided, along with analytical expressions for all of its gradient updates.
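As a concrete illustration of the kind of gradient verification the abstract alludes to, the sketch below checks hand-derived LoRA gradients for a single linear layer against PyTorch autograd. This is a hypothetical toy example, not taken from the paper's accompanying implementation; the dimensions, scaling factor and squared loss are arbitrary placeholders.

import torch

# Hypothetical sketch: verify analytical LoRA gradients against PyTorch autograd.
# Adapted layer: y = x W^T + (alpha/r) * (x A^T) B^T, with W frozen and A, B trainable.
torch.manual_seed(0)
n, d_in, d_out, r = 4, 8, 6, 2             # batch size, input/output dims, LoRA rank (arbitrary)
alpha = 16.0
s = alpha / r                              # usual LoRA scaling factor

x = torch.randn(n, d_in)
W = torch.randn(d_out, d_in)               # frozen pretrained weight (no grad)
A = torch.randn(r, d_in, requires_grad=True)
B = torch.randn(d_out, r, requires_grad=True)   # LoRA normally initializes B to zero;
                                                # random here so the check is non-trivial

y = x @ W.T + s * (x @ A.T) @ B.T          # forward pass through the adapted layer
loss = y.pow(2).sum()                      # arbitrary scalar loss
loss.backward()                            # autograd gradients land in A.grad and B.grad

with torch.no_grad():                      # analytical gradients, with g = dL/dy = 2 y
    g = 2 * y
    dB = s * g.T @ (x @ A.T)               # dL/dB, shape (d_out, r)
    dA = s * (g @ B).T @ x                 # dL/dA, shape (r, d_in)

print(torch.allclose(dB, B.grad, atol=1e-4))   # expected: True
print(torch.allclose(dA, A.grad, atol=1e-4))   # expected: True

The same comparison pattern carries over to the embedding, multi-headed self-attention and layer-normalization gradients derived in the paper.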