HierarchicalPrune: Position-Aware Compression for Large-Scale Diffusion Models
By: Young D. Kwon, Rui Li, Sijia Li, and more
Potential Business Impact:
Makes large AI image generators small and fast enough to run on phones.
State-of-the-art text-to-image diffusion models (DMs) achieve remarkable quality, yet their massive parameter scale (8-11B) poses significant challenges for inference on resource-constrained devices. In this paper, we present HierarchicalPrune, a novel compression framework grounded in a key observation: DM blocks exhibit distinct functional hierarchies, where early blocks establish semantic structures while later blocks handle texture refinements. HierarchicalPrune synergistically combines three techniques: (1) Hierarchical Position Pruning, which identifies and removes less essential later blocks based on position hierarchy; (2) Positional Weight Preservation, which systematically protects early model portions that are essential for semantic structural integrity; and (3) Sensitivity-Guided Distillation, which adjusts knowledge-transfer intensity based on our discovery of block-wise sensitivity variations. As a result, our framework brings billion-scale diffusion models into a range more suitable for on-device inference, while preserving the quality of the output images. Specifically, when combined with INT4 weight quantisation, HierarchicalPrune achieves a 77.5-80.4% memory footprint reduction (e.g., from 15.8 GB to 3.2 GB) and a 27.9-38.0% latency reduction, measured on server and consumer-grade GPUs, with a minimal drop of 2.6% in GenEval score and 7% in HPSv2 score compared to the original model. Finally, our comprehensive user study with 85 participants demonstrates that HierarchicalPrune maintains perceptual quality comparable to the original model while significantly outperforming prior works.
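To make the position-aware pruning idea concrete, here is a minimal sketch (not the authors' code) of how later blocks of a transformer-style diffusion backbone could be dropped while the early, semantics-defining blocks are kept intact. The block class, the importance heuristic, and the `protect_front`/`keep_ratio` parameters are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: position-aware block pruning of a toy diffusion backbone.
# All names and the depth-based importance score are hypothetical.
import torch
import torch.nn as nn


class ToyBlock(nn.Module):
    """Stand-in for one transformer block of the diffusion backbone."""

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x):
        return x + self.mlp(self.norm(x))


def position_aware_prune(blocks: nn.ModuleList,
                         protect_front: float = 0.4,
                         keep_ratio: float = 0.6) -> nn.ModuleList:
    """Keep all early blocks (semantic structure) and drop the least
    important later blocks (texture refinement) until roughly
    `keep_ratio` of the blocks remain."""
    n = len(blocks)
    protected = int(n * protect_front)        # early blocks are never pruned
    target = max(protected, int(n * keep_ratio))
    # Assumed importance score: deeper (later) positions are less essential.
    candidates = list(range(protected, n))
    scores = {i: (n - i) / n for i in candidates}
    keep_later = sorted(
        sorted(candidates, key=lambda i: scores[i], reverse=True)[: target - protected]
    )
    kept = list(range(protected)) + keep_later
    return nn.ModuleList(blocks[i] for i in kept)


if __name__ == "__main__":
    dim, depth = 64, 20
    backbone = nn.ModuleList(ToyBlock(dim) for _ in range(depth))
    pruned = position_aware_prune(backbone)
    x = torch.randn(2, 16, dim)
    for blk in pruned:
        x = blk(x)
    print(f"blocks: {depth} -> {len(pruned)}, output shape {tuple(x.shape)}")
```

In the full method described in the abstract, the pruned model would then be recovered with sensitivity-guided distillation and compressed further with INT4 weight quantisation; this sketch only illustrates the position-based block selection step.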
Similar Papers
Which Layer Causes Distribution Deviation? Entropy-Guided Adaptive Pruning for Diffusion and Flow Models
CV and Pattern Recognition
Makes AI art generators faster and smaller.
Towards Efficient VLMs: Information-Theoretic Driven Compression via Adaptive Structural Pruning
CV and Pattern Recognition
Makes AI models smaller and faster.
PruneX: A Hierarchical Communication-Efficient System for Distributed CNN Training with Structured Pruning
Distributed, Parallel, and Cluster Computing
Makes AI training faster by cutting data sent between computers.