Accelerating Controllable Generation via Hybrid-grained Cache
By: Lin Liu, Huixia Ben, Shuo Wang, and more
Potential Business Impact:
Makes AI image creation much faster.
Controllable generative models have been widely used to improve the realism of synthetic visual content. However, such models must process control conditions in addition to generating content, which adds computational overhead and generally results in low generation efficiency. To address this issue, we propose a Hybrid-Grained Cache (HGC) approach that reduces computational overhead by adopting cache strategies of different granularities at different computational stages. Specifically, (1) we use a coarse-grained cache (block-level) based on feature reuse to dynamically bypass redundant computations in encoder-decoder blocks between consecutive inference steps. (2) We design a fine-grained cache (prompt-level) that acts within a module: it reuses cross-attention maps across consecutive inference steps and extends them to the corresponding module computations of adjacent steps. These caches of different granularities can be seamlessly integrated into every computational stage of the controllable generation process. We verify the effectiveness of HGC on four benchmark datasets, highlighting its ability to balance generation efficiency and visual quality. For example, on the COCO-Stuff segmentation benchmark, HGC reduces the computational cost (MACs) by 63% (from 18.22T to 6.70T) while keeping the loss of semantic fidelity (quantified performance degradation) within 1.5%.
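To make the two cache granularities concrete, below is a minimal sketch of how such a scheme could be wired into a diffusion-style inference loop. It is not the authors' implementation: the class names (`BlockLevelCache`, `CrossAttentionCache`), the cosine-similarity threshold used to decide when a block can be bypassed, and the fixed reuse interval for cross-attention maps are all illustrative assumptions.

```python
# Hypothetical sketch of hybrid-grained caching; names and heuristics are
# assumptions, not the paper's released code.
import numpy as np


def cosine_similarity(a, b):
    """Cosine similarity between two flattened feature tensors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


class BlockLevelCache:
    """Coarse-grained (block-level) cache: if a block's input barely changes
    between consecutive inference steps, reuse the cached output and skip the block."""

    def __init__(self, threshold=0.95):
        self.threshold = threshold          # assumed similarity threshold
        self.prev_input = None
        self.prev_output = None

    def __call__(self, block_fn, x):
        if (self.prev_input is not None
                and cosine_similarity(x, self.prev_input) >= self.threshold):
            return self.prev_output          # bypass the redundant computation
        out = block_fn(x)                    # full block computation
        self.prev_input, self.prev_output = x, out
        return out


class CrossAttentionCache:
    """Fine-grained (prompt-level) cache: reuse the cross-attention map for a
    fixed number of adjacent steps; values still come from the current step."""

    def __init__(self, reuse_interval=2):
        self.reuse_interval = reuse_interval  # assumed reuse window
        self.cached_attn = None
        self.age = 0

    def attention(self, q, k, v):
        if self.cached_attn is None or self.age >= self.reuse_interval:
            scores = q @ k.T / np.sqrt(q.shape[-1])
            scores -= scores.max(axis=-1, keepdims=True)
            attn = np.exp(scores)
            attn /= attn.sum(axis=-1, keepdims=True)
            self.cached_attn, self.age = attn, 0
        self.age += 1
        return self.cached_attn @ v


# Toy inference loop: one "block" and one cross-attention module per step.
rng = np.random.default_rng(0)
block_cache, attn_cache = BlockLevelCache(), CrossAttentionCache()
x = rng.standard_normal((4, 8))
k, v = rng.standard_normal((6, 8)), rng.standard_normal((6, 8))
for step in range(4):
    x = block_cache(lambda h: np.tanh(h), x + 0.01 * rng.standard_normal(x.shape))
    x = attn_cache.attention(x, k, v)
```

In this toy loop the block output is recomputed only when its input drifts past the similarity threshold, and the cross-attention map is refreshed every `reuse_interval` steps; a real implementation would apply the same idea per block and per cross-attention module of the controllable diffusion backbone.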
Similar Papers
GRACE: Designing Generative Face Video Codec via Agile Hardware-Centric Workflow
CV and Pattern Recognition
Makes talking videos work on small, low-power gadgets.
Controllable Video Generation: A Survey
Graphics
Makes AI videos match your exact ideas.
Dynamic Granularity Matters: Rethinking Vision Transformers Beyond Fixed Patch Splitting
CV and Pattern Recognition
Makes computer vision see details better, faster.