Score: 2

Z-Pruner: Post-Training Pruning of Large Language Models for Efficiency without Retraining

Published: August 18, 2025 | arXiv ID: 2508.15828v1

By: Samiul Basir Bhuiyan, Md. Sazzad Hossain Adib, Mohammed Aman Bhuiyan, and more

Potential Business Impact:

Makes large AI models smaller and faster to run, without costly retraining.

Large language models (LLMs) have rapidly advanced in recent years, achieving remarkable performance across a wide range of natural language processing tasks. However, this progress has come at the cost of increasingly large model sizes, which pose significant challenges for deployment, scalability, and energy efficiency. To address these limitations, post-training pruning has emerged as a promising approach for reducing model size and inference latency without the need for retraining. Despite these advantages, many existing pruning methods result in substantial performance degradation or require computationally expensive fine-tuning. In this work, we introduce Z-Pruner, a novel post-training pruning method designed to induce sparsity in pretrained LLMs without any retraining. Unlike conventional approaches, Z-Pruner leverages both weight update magnitudes and activation patterns to identify and eliminate redundant parameters more effectively. Our method is model-agnostic, efficient, and easy to implement. We evaluate Z-Pruner using multiple widely used LLM architectures, including LLaMA-2, LLaMA-3, and OPT, across a diverse set of standard language benchmarks. Experimental results demonstrate that Z-Pruner surpasses state-of-the-art pruning methods that require intensive weight updates. Specifically, Z-Pruner achieves the lowest perplexity scores and the highest overall average score for zero-shot accuracy. We have made the corresponding code publicly available at https://github.com/sazzadadib/Z-Pruner.
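
The abstract sketches the core recipe: score each weight by combining its magnitude with activation statistics gathered from a small calibration set, then zero out the lowest-scoring weights with no retraining. The Python snippet below is a minimal illustrative sketch of that general idea; the function name prune_linear_layer and the exact importance formula are assumptions for illustration, not the paper's actual Z-Pruner criterion (see the linked repository for the real implementation).

import torch

def prune_linear_layer(weight: torch.Tensor,
                       activation_norm: torch.Tensor,
                       sparsity: float = 0.5) -> torch.Tensor:
    # weight:          (out_features, in_features) matrix of a linear layer
    # activation_norm: (in_features,) per-input-channel activation norms
    #                  collected from a small calibration set
    # sparsity:        fraction of weights to zero out in each output row

    # Importance score: |w| scaled by how strongly its input channel fires.
    # This is an activation-aware magnitude criterion in spirit only;
    # Z-Pruner's actual scoring rule may differ.
    importance = weight.abs() * activation_norm.unsqueeze(0)

    k = int(weight.shape[1] * sparsity)
    if k == 0:
        return weight

    # Indices of the k least-important weights per row, masked to zero.
    _, prune_idx = torch.topk(importance, k, dim=1, largest=False)
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, prune_idx, False)
    return weight * mask  # pruned entries become zero; no retraining involved

# Example: 50% unstructured sparsity on one toy layer.
layer = torch.nn.Linear(4096, 4096, bias=False)
calib_norm = torch.rand(4096)  # stand-in for real calibration statistics
layer.weight.data = prune_linear_layer(layer.weight.data, calib_norm, 0.5)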

Country of Origin
🇧🇩 Bangladesh

Repos / Data Links
https://github.com/sazzadadib/Z-Pruner

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)