NanoSD: Edge-Efficient Foundation Model for Real-Time Image Restoration
By: Subhajit Sanyal, Srinivas Soumitri Miriyala, Akshay Janardan Bankar, and more
Latent diffusion models such as Stable Diffusion 1.5 offer strong generative priors that are highly valuable for image restoration, yet their full pipelines remain too computationally heavy for deployment on edge devices. Existing lightweight variants predominantly compress the denoising U-Net or reduce the diffusion trajectory, which disrupts the underlying latent manifold and limits generalization beyond a single task. We introduce NanoSD, a family of Pareto-optimal diffusion foundation models distilled from Stable Diffusion 1.5 through network surgery, feature-wise generative distillation, and structured architectural scaling jointly applied to the U-Net and the VAE encoder-decoder. This full-pipeline co-design preserves the generative prior while producing models that occupy distinct operating points along the accuracy-latency-size frontier (e.g., 130M-315M parameters, achieving real-time inference down to 20ms on mobile-class NPUs). We show that parameter reduction alone does not correlate with hardware efficiency, and we provide an analysis revealing how architectural balance, feature routing, and latent-space preservation jointly shape true on-device latency. When used as a drop-in backbone, NanoSD enables state-of-the-art performance across image super-resolution, image deblurring, face restoration, and monocular depth estimation, outperforming prior lightweight diffusion models in both perceptual quality and practical deployability. NanoSD establishes a general-purpose diffusion foundation model family suitable for real-time visual generation and restoration on edge devices.
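The abstract names feature-wise generative distillation as one of the techniques used to compress the pipeline. The paper itself does not specify the loss here, but a common form of feature-wise distillation matches intermediate student activations to teacher activations through learned projections that bridge the channel-width gap. The sketch below is a hypothetical illustration of that idea; the function names, feature shapes, and use of 1x1 convolutional projections are assumptions, not NanoSD's actual recipe.

```python
import torch
import torch.nn as nn

def feature_distillation_loss(student_feats, teacher_feats, projections):
    """Hypothetical feature-wise distillation: project each student feature
    map up to the teacher's channel width, then penalize the MSE between
    the projected student features and the frozen teacher features."""
    loss = torch.zeros(())
    for s, t, proj in zip(student_feats, teacher_feats, projections):
        loss = loss + nn.functional.mse_loss(proj(s), t)
    return loss / len(projections)

# Toy example: two U-Net feature levels where the student (e.g., a slimmed
# SD 1.5 U-Net) has half the teacher's channel width at each level.
teacher_feats = [torch.randn(1, 320, 32, 32), torch.randn(1, 640, 16, 16)]
student_feats = [torch.randn(1, 160, 32, 32), torch.randn(1, 320, 16, 16)]
projections = [nn.Conv2d(160, 320, kernel_size=1),
               nn.Conv2d(320, 640, kernel_size=1)]

loss = feature_distillation_loss(student_feats, teacher_feats, projections)
```

In practice such a loss would be combined with the usual denoising objective during distillation; only the projections and the student are trained, while the teacher stays frozen.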