Delta-SVD: Efficient Compression for Personalized Text-to-Image Models
By: Tangyuan Zhang, Shangyu Chen, Qixiang Chen, and more
Potential Business Impact:
Shrinks AI art models to save space.
Personalized text-to-image models such as DreamBooth require fine-tuning large-scale diffusion backbones, resulting in significant storage overhead when maintaining many subject-specific models. We present Delta-SVD, a post-hoc, training-free compression method that targets the parameter-weight updates induced by DreamBooth fine-tuning. Our key observation is that these delta weights exhibit strong low-rank structure due to the sparse and localized nature of personalization. Delta-SVD first applies Singular Value Decomposition (SVD) to factorize the weight deltas, followed by an energy-based rank truncation strategy to balance compression efficiency and reconstruction fidelity. The resulting compressed models are fully plug-and-play and can be reconstructed on the fly during inference. Notably, the proposed approach is simple, efficient, and preserves the original model architecture. Experiments on a multi-subject dataset demonstrate that Delta-SVD achieves substantial compression with negligible loss in generation quality as measured by CLIP score, SSIM, and FID. Our method enables scalable and efficient deployment of personalized diffusion models, making it a practical solution for real-world applications that require storing and deploying large-scale subject customizations.
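To make the pipeline concrete, below is a minimal sketch of the idea in NumPy: compute the delta between fine-tuned and base weights, truncate its SVD at the smallest rank retaining a chosen fraction of squared spectral energy, and rebuild the personalized weight at inference time. The function names, the exact energy rule, and the threshold value are illustrative assumptions, not the paper's reference implementation.

import numpy as np

def delta_svd_compress(w_base, w_finetuned, energy=0.95):
    """Compress the fine-tuning delta of one weight matrix via truncated SVD.

    Rank k is the smallest value whose singular values retain `energy`
    fraction of the total squared spectral energy (an assumed form of
    the energy-based truncation rule described in the abstract).
    """
    delta = w_finetuned - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    cum_energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(cum_energy, energy)) + 1
    k = max(1, min(k, s.size))  # guard against floating-point edge cases
    # Store only the rank-k factors: m*k + k + k*n numbers instead of m*n.
    return u[:, :k], s[:k], vt[:k, :]

def delta_svd_reconstruct(w_base, u_k, s_k, vt_k):
    """Rebuild the personalized weight on the fly: W = W_base + U_k diag(S_k) V_k^T."""
    return w_base + (u_k * s_k) @ vt_k

# Toy usage: a synthetic low-rank delta is recovered almost exactly.
rng = np.random.default_rng(0)
w_base = rng.standard_normal((256, 256))
low_rank_update = 0.01 * rng.standard_normal((256, 8)) @ rng.standard_normal((8, 256))
w_ft = w_base + low_rank_update

u_k, s_k, vt_k = delta_svd_compress(w_base, w_ft, energy=0.99)
w_rebuilt = delta_svd_reconstruct(w_base, u_k, s_k, vt_k)
print("rank kept:", s_k.size, "max abs error:", np.abs(w_rebuilt - w_ft).max())

Because only the rank-k factors are stored per layer, per-subject storage scales with k rather than with the full layer size, which is what makes keeping many subject-specific models practical.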
Similar Papers
FlashSVD: Memory-Efficient Inference with Streaming for Low-Rank Models
Machine Learning (CS)
Makes big AI models fit on phones.
CPSVD: Enhancing Large Language Model Compression via Column-Preserving Singular Value Decomposition
Machine Learning (CS)
Makes big AI models smaller without losing smarts.
Low-Rank Prehab: Preparing Neural Networks for SVD Compression
Machine Learning (CS)
Prepares AI to shrink without losing smarts.