Ladder Up, Memory Down: Low-Cost Fine-Tuning With Side Nets
By: Estelle Zheng, Nathan Cerisara, Sébastien Warichet, and more
Potential Business Impact:
Lets AI models learn new tasks using less computer memory.
Fine-tuning large language models (LLMs) is often limited by the memory available on commodity GPUs. Parameter-efficient fine-tuning (PEFT) methods such as QLoRA reduce the number of trainable parameters, yet still incur high memory usage because the backward pass must traverse the full model. We revisit Ladder Side Tuning (LST), a rarely explored PEFT technique that adds a lightweight side network, and show that it matches QLoRA's compute scaling slope while cutting peak memory by 50%. Across downstream benchmarks spanning natural language understanding, mathematical, and LLM-critic tasks, LST matches QLoRA's accuracy on average while being far more memory-efficient. This efficiency enables fine-tuning of 7B-parameter models on a single 12 GB consumer GPU with 2k-token contexts and no gradient checkpointing, a setting in which QLoRA exhausts memory. Beyond memory efficiency, we establish scaling laws showing that LST scales similarly to QLoRA. We then exploit Ladder's architectural flexibility by introducing xLadder, a depth-extended variant that adds cross-connections to increase effective depth and shorten chain-of-thought (CoT) traces at a fixed parameter count. Ladder is strong when memory is the bottleneck; xLadder builds on this by enabling deeper reasoning without additional memory overhead.
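To make the mechanism concrete, below is a minimal PyTorch sketch of an LST-style side network. The frozen backbone is run once without gradients; a small side network consumes downsampled copies of its per-layer hidden states through learned gates, so backpropagation touches only the side parameters. The module names, the reduction factor of 8, and the gating scheme are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class SideBlock(nn.Module):
    # One "rung" of the ladder: mix the running side state with a
    # downsampled backbone activation via a learned gate, then apply
    # a small MLP with a residual connection.
    def __init__(self, side_dim):
        super().__init__()
        self.gate = nn.Parameter(torch.zeros(1))  # sigmoid(0) = 0.5 mix
        self.mlp = nn.Sequential(
            nn.Linear(side_dim, side_dim),
            nn.GELU(),
            nn.Linear(side_dim, side_dim),
        )

    def forward(self, side_state, backbone_state):
        g = torch.sigmoid(self.gate)
        fused = g * side_state + (1 - g) * backbone_state
        return side_state + self.mlp(fused)  # residual update

class LadderSideNet(nn.Module):
    def __init__(self, num_layers, hidden_dim, reduction=8):
        super().__init__()
        side_dim = hidden_dim // reduction  # e.g. 4096 -> 512
        self.down = nn.ModuleList(
            nn.Linear(hidden_dim, side_dim) for _ in range(num_layers)
        )
        self.blocks = nn.ModuleList(
            SideBlock(side_dim) for _ in range(num_layers - 1)
        )
        self.up = nn.Linear(side_dim, hidden_dim)

    def forward(self, hidden_states):
        # hidden_states: list of num_layers per-layer backbone
        # activations. Detaching them guarantees no gradient (and no
        # activation storage for backward) inside the frozen backbone.
        hs = [h.detach() for h in hidden_states]
        side = self.down[0](hs[0])
        for i in range(1, len(hs)):
            side = self.blocks[i - 1](side, self.down[i](hs[i]))
        return self.up(side)

In use, the backbone would run under torch.no_grad() (for a Hugging Face model, called with output_hidden_states=True to collect the per-layer states), and the optimizer would hold only the side network's parameters. Removing the full-model backward pass in this way is what eliminates the activation memory that dominates QLoRA's footprint.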
Similar Papers
LoRAFusion: Efficient LoRA Fine-Tuning for LLMs
Machine Learning (CS)
Makes AI learn faster and use less power.
HyperAdaLoRA: Accelerating LoRA Rank Allocation During Training via Hypernetworks without Sacrificing Performance
Machine Learning (CS)
Makes AI learn faster without needing more power.
How Can Quantum Deep Learning Improve Large Language Models?
Quantum Physics
Makes AI learn new things much faster and cheaper.