
Efficient LLMs with AMP: Attention Heads and MLP Pruning

Published: April 29, 2025 | arXiv ID: 2504.21174v1

By: Leandro Giusti Mugnaini, Bruno Lopes Yamamoto, Lucas Lauton de Alcantara, and more

Potential Business Impact:

Shrinks large language models and speeds up their inference, lowering the cost of deploying them on resource-limited hardware.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Deep learning drives a new wave in computing systems and triggers the automation of increasingly complex problems. In particular, Large Language Models (LLMs) have significantly advanced cognitive tasks, often matching or even surpassing human-level performance. However, their extensive parameters result in high computational costs and slow inference, posing challenges for deployment in resource-limited settings. Among the strategies to overcome the aforementioned challenges, pruning emerges as a successful mechanism since it reduces model size while maintaining predictive ability. In this paper, we introduce AMP: Attention Heads and MLP Pruning, a novel structured pruning method that efficiently compresses LLMs by removing less critical structures within Multi-Head Attention (MHA) and Multilayer Perceptron (MLP). By projecting the input data onto weights, AMP assesses structural importance and overcomes the limitations of existing techniques, which often fall short in flexibility or efficiency. In particular, AMP surpasses the current state-of-the-art on commonsense reasoning tasks by up to 1.49 percentage points, achieving a 30% pruning ratio with minimal impact on zero-shot task performance. Moreover, AMP also improves inference speeds, making it well-suited for deployment in resource-constrained environments. We confirm the flexibility of AMP on different families of LLMs, including LLaMA and Phi.
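
The core idea described in the abstract, scoring each attention head and MLP structure by projecting calibration activations onto its weights and then removing the lowest-scoring ones, can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions: the toy dimensions, the L2-norm scoring rule, the helper names (head_scores, neuron_scores), and the 30% pruning ratio are illustrative choices, not the paper's exact procedure.

```python
# Minimal sketch of activation-aware structured pruning in the spirit of AMP.
# Assumptions (not from the paper): toy dimensions, L2-norm scoring rule,
# and helper names are illustrative only.
import torch

torch.manual_seed(0)

hidden, n_heads, head_dim, mlp_dim = 64, 8, 8, 256
seq_len, prune_ratio = 32, 0.3

# Toy weights standing in for one transformer block.
W_o = torch.randn(hidden, hidden)        # attention output projection, rows grouped per head
W_up = torch.randn(hidden, mlp_dim)      # MLP up-projection, one column per neuron

# Calibration activations: a small batch of hidden states.
X = torch.randn(seq_len, hidden)

def head_scores(X, W_o, n_heads, head_dim):
    # Project calibration data onto each head's slice of the output projection
    # and use the magnitude of the result as that head's importance.
    per_head = W_o.view(n_heads, head_dim, -1)            # (heads, head_dim, hidden)
    return torch.stack([(X @ w.T).norm() for w in per_head])

def neuron_scores(X, W_up):
    # Each MLP neuron corresponds to one column of the up-projection;
    # score it by the norm of its projected activations.
    return (X @ W_up).norm(dim=0)                          # (mlp_dim,)

h_scores = head_scores(X, W_o, n_heads, head_dim)
n_scores = neuron_scores(X, W_up)

# Keep the highest-scoring structures; drop roughly `prune_ratio` of them.
keep_heads = torch.topk(h_scores, k=int(n_heads * (1 - prune_ratio))).indices
keep_neurons = torch.topk(n_scores, k=int(mlp_dim * (1 - prune_ratio))).indices

print(f"kept heads:   {sorted(keep_heads.tolist())}")
print(f"kept neurons: {len(keep_neurons)} of {mlp_dim}")
```

In a real pipeline the kept indices would be used to slice the corresponding weight matrices in place, which is what yields the smaller model and faster inference reported in the abstract.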

Page Count
8 pages

Category
Computer Science: Machine Learning (CS)