Iterative Structured Pruning for Large Language Models with Multi-Domain Calibration

Published: January 6, 2026 | arXiv ID: 2601.02674v1

By: Guangxin Wu, Hao Zhang, Zhang Zhibin, and more

Potential Business Impact:

Compresses large language models so they run faster and cost less to deploy on standard hardware.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) have achieved remarkable success across a wide spectrum of natural language processing tasks. However, their ever-growing scale introduces significant barriers to real-world deployment, including substantial computational overhead, memory footprint, and inference latency. While model pruning presents a viable solution to these challenges, existing unstructured pruning techniques often yield irregular sparsity patterns that necessitate specialized hardware or software support. In this work, we explore structured pruning, which eliminates entire architectural components and maintains compatibility with standard hardware accelerators. We introduce a novel structured pruning framework that leverages a hybrid multi-domain calibration set and an iterative calibration strategy to effectively identify and remove redundant channels. Extensive experiments on various models across diverse downstream tasks show that our approach achieves significant compression with minimal performance degradation.
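The abstract describes pruning entire channels guided by a calibration set drawn from multiple domains, applied iteratively rather than in one shot. The sketch below illustrates that general recipe on a toy feed-forward block; the importance metric (mean absolute activation per channel), the 20%-per-step schedule, and all names are illustrative assumptions, not the authors' released method or code.

```python
# Minimal sketch (assumed details, not the paper's implementation) of
# iterative structured channel pruning with a mixed-domain calibration set.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy two-layer MLP standing in for one transformer FFN block.
hidden, inner = 64, 256
ffn = nn.Sequential(nn.Linear(hidden, inner), nn.ReLU(), nn.Linear(inner, hidden))

# Hybrid calibration set: a few batches from several "domains"
# (placeholder random tensors with different scales here).
calib = [torch.randn(8, hidden) * s for s in (0.5, 1.0, 2.0)]

def channel_importance(model, batches):
    """Mean absolute activation of each intermediate channel over calibration data."""
    acts = []
    for x in batches:
        with torch.no_grad():
            h = torch.relu(model[0](x))      # intermediate activations
        acts.append(h.abs().mean(dim=0))
    return torch.stack(acts).mean(dim=0)     # average across domains

def prune_channels(model, keep_idx):
    """Rebuild the FFN keeping only the selected intermediate channels."""
    fc1, fc2 = model[0], model[2]
    new_fc1 = nn.Linear(fc1.in_features, len(keep_idx))
    new_fc2 = nn.Linear(len(keep_idx), fc2.out_features)
    new_fc1.weight.data = fc1.weight.data[keep_idx].clone()
    new_fc1.bias.data = fc1.bias.data[keep_idx].clone()
    new_fc2.weight.data = fc2.weight.data[:, keep_idx].clone()
    new_fc2.bias.data = fc2.bias.data.clone()
    return nn.Sequential(new_fc1, nn.ReLU(), new_fc2)

# Iterative calibration: prune a small fraction per step and re-measure
# channel importance on the calibration set after each step.
for step in range(3):
    scores = channel_importance(ffn, calib)
    n_keep = int(scores.numel() * 0.8)        # drop 20% of channels per step
    keep_idx = torch.topk(scores, n_keep).indices.sort().values
    ffn = prune_channels(ffn, keep_idx)
    print(f"step {step}: kept {n_keep} intermediate channels")
```

Because whole channels are removed, the pruned layers stay dense and run on standard accelerators without sparse-kernel support, which is the practical advantage of structured over unstructured pruning noted in the abstract.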

Page Count
10 pages

Category
Computer Science:
Computation and Language