FastForward Pruning: Efficient LLM Pruning via Single-Step Reinforcement Learning
By: Xin Yuan, Siqi Li, Jiateng Wei, and more
Potential Business Impact:
Makes AI models smaller and faster to run.
Pruning is an effective method for compressing Large Language Models, but finding an optimal, non-uniform layer-wise sparsity allocation remains a key challenge. Heuristic methods are fast but yield suboptimal performance, while more powerful search-based approaches such as Reinforcement Learning are often hindered by prohibitive computational costs on large-scale models. To overcome this efficiency barrier, we propose FastForward Pruning. Its core is a decoupled, single-step RL framework that separates policy optimization from the complex budget satisfaction problem; such a decoupling is crucial for efficiently searching the vast policy space of LLMs. The framework further adopts a curriculum-based search strategy that begins with low-cost, simple tasks and gradually increases in complexity, significantly reducing the search's computational overhead. Evaluated on the LLaMA, Mistral, and OPT model families, our framework discovers pruning policies that achieve superior performance over strong heuristic baselines. Crucially, compared to other search-based algorithms, our method achieves competitive or superior results at a fraction of the computational cost, demonstrating a clear advantage in search efficiency.
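To make the decoupled, single-step idea concrete, the sketch below shows one way such a search could look: a Gaussian policy proposes a full layer-wise sparsity allocation in a single step, a separate projection routine handles budget satisfaction, and a REINFORCE-style update adjusts the policy from a proxy reward. Everything here is an assumption for illustration only (the layer count, the Gaussian policy, the projection heuristic, and the toy `proxy_reward`); it is not the authors' implementation.

```python
# Illustrative sketch of a decoupled, single-step RL search for layer-wise sparsity.
# NUM_LAYERS, the Gaussian policy, the budget projection, and proxy_reward are
# all assumptions for demonstration, not the paper's actual method.
import numpy as np

rng = np.random.default_rng(0)

NUM_LAYERS = 32          # assumed depth of the target model
TARGET_SPARSITY = 0.5    # global pruning budget (fraction of weights removed)
LR = 0.05
STEPS = 200

def project_to_budget(ratios, budget):
    """Decoupled budget satisfaction: shift per-layer ratios so their mean
    matches the global budget, then clip to a valid range."""
    shifted = ratios + (budget - ratios.mean())
    return np.clip(shifted, 0.0, 0.95)

def proxy_reward(ratios):
    """Stand-in for a real pruning evaluation (e.g., perplexity on a small
    calibration set). Here a toy objective that penalizes uneven allocations."""
    return -np.var(ratios)

# Gaussian policy over per-layer sparsity ratios; each "episode" is a single
# step: sample one full allocation, score it, update the policy mean.
mu = np.full(NUM_LAYERS, TARGET_SPARSITY)
sigma = 0.05
baseline = 0.0

for step in range(STEPS):
    sample = rng.normal(mu, sigma)                 # one-step action
    feasible = project_to_budget(sample, TARGET_SPARSITY)
    reward = proxy_reward(feasible)
    baseline = 0.9 * baseline + 0.1 * reward       # running baseline for variance reduction
    # REINFORCE gradient for a fixed-variance Gaussian policy
    grad = (reward - baseline) * (sample - mu) / (sigma ** 2)
    mu += LR * grad

print("final allocation:", project_to_budget(mu, TARGET_SPARSITY).round(3))
```

A curriculum version of this loop would simply swap in progressively more expensive reward evaluations (larger calibration sets or models) as the search proceeds, which is how the abstract describes the cost reduction.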
Similar Papers
Don't Be Greedy, Just Relax! Pruning LLMs via Frank-Wolfe
Machine Learning (CS)
Makes big computer brains smaller and faster.
Beyond Manually Designed Pruning Policies with Second-Level Performance Prediction: A Pruning Framework for LLMs
Machine Learning (CS)
Makes big computer brains smaller, faster, and smarter.