FastForward Pruning: Efficient LLM Pruning via Single-Step Reinforcement Learning

Published: November 24, 2025 | arXiv ID: 2511.18977v1

By: Xin Yuan, Siqi Li, Jiateng Wei, and more

Potential Business Impact:

Makes AI models smaller and faster to run, while lowering the cost of finding good pruning configurations.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Pruning is an effective method for compressing Large Language Models, but finding an optimal, non-uniform layer-wise sparsity allocation remains a key challenge. Heuristic methods are fast but yield suboptimal performance, while more powerful search-based approaches such as Reinforcement Learning are often hindered by prohibitive computational costs on large-scale models. To overcome this efficiency barrier, we propose FastForward Pruning. Its core is a decoupled, single-step RL framework that separates policy optimization from the complex budget satisfaction problem; such a decoupling is crucial for efficiently searching the vast policy space of LLMs. The search cost is further reduced by a curriculum-based strategy that begins with low-cost, simple tasks and gradually increases in complexity. Evaluated on the LLaMA, Mistral, and OPT model families, our framework discovers pruning policies that outperform strong heuristic baselines. Crucially, compared to other search-based algorithms, our method achieves competitive or superior results at a fraction of the computational cost, demonstrating a clear advantage in search efficiency.
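To make the "decoupled, single-step RL" idea concrete, here is a minimal sketch, not the authors' implementation: it assumes a Gaussian policy over per-layer sparsity ratios, a simple projection step that handles budget satisfaction outside the policy, a one-step (bandit-style) REINFORCE update, and a placeholder reward in place of actually pruning and evaluating a model. All function names and hyperparameters below are illustrative assumptions.

```python
# Sketch of a decoupled, single-step RL search for layer-wise sparsity allocation.
# Hypothetical components: Gaussian policy, budget projection, proxy reward.
import numpy as np

NUM_LAYERS = 32          # e.g., number of decoder layers in a 7B model
TARGET_SPARSITY = 0.5    # global pruning budget (fraction of weights removed)

def project_to_budget(raw, target):
    """Budget satisfaction handled outside the policy: clip per-layer ratios
    to [0, 1] and rescale so their mean matches the global target."""
    s = np.clip(raw, 0.0, 1.0)
    s = s * (target / max(s.mean(), 1e-8))
    return np.clip(s, 0.0, 1.0)

def proxy_reward(sparsity):
    """Placeholder reward (hypothetical). In practice one would prune the model
    at these per-layer ratios and return, e.g., negative validation perplexity."""
    return -np.sum((sparsity - TARGET_SPARSITY) ** 2)

# Gaussian policy over per-layer sparsity, optimized with one-step REINFORCE.
mean = np.full(NUM_LAYERS, TARGET_SPARSITY)
log_std = np.full(NUM_LAYERS, np.log(0.05))
lr, baseline = 1e-2, 0.0

for step in range(200):
    std = np.exp(log_std)
    action = mean + std * np.random.randn(NUM_LAYERS)   # sample an allocation
    sparsity = project_to_budget(action, TARGET_SPARSITY)
    reward = proxy_reward(sparsity)

    # Single-step update: no rollout, no value network, just a running baseline.
    advantage = reward - baseline
    baseline = 0.9 * baseline + 0.1 * reward
    mean += lr * advantage * (action - mean) / (std ** 2)
    log_std += lr * advantage * (((action - mean) ** 2) / (std ** 2) - 1.0)

print("Per-layer sparsity:", np.round(project_to_budget(mean, TARGET_SPARSITY), 3))
```

A curriculum, as described in the abstract, would wrap this loop so that early iterations use cheap evaluations (small calibration sets or proxy metrics) and later iterations use progressively more expensive ones; the exact schedule is not specified here.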

Country of Origin
🇨🇳 China

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)