HiCache: Training-free Acceleration of Diffusion Models via Hermite Polynomial-based Feature Caching

Published: August 23, 2025 | arXiv ID: 2508.16984v1

By: Liang Feng, Shikang Zheng, Jiacheng Liu, and more

Potential Business Impact:

Makes AI art and video creation much faster.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Diffusion models have achieved remarkable success in content generation but suffer from prohibitive computational costs due to iterative sampling. While recent feature caching methods accelerate inference through temporal extrapolation, they still suffer from severe quality loss because they fail to model the complex dynamics of feature evolution. To solve this problem, this paper presents HiCache, a training-free acceleration framework that fundamentally improves feature prediction by aligning mathematical tools with empirical properties. Our key insight is that feature derivative approximations in Diffusion Transformers exhibit multivariate Gaussian characteristics, motivating the use of Hermite polynomials, a theoretically well-suited basis for Gaussian-correlated processes. We further introduce a dual-scaling mechanism that ensures numerical stability while preserving predictive accuracy. Extensive experiments demonstrate HiCache's superiority: it achieves a 6.24x speedup on FLUX.1-dev while exceeding baseline quality, and it maintains strong performance across text-to-image, video generation, and super-resolution tasks. Core implementation is provided in the appendix, with complete code to be released upon acceptance.
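To make the core idea concrete, here is a minimal, hypothetical sketch of Hermite-polynomial feature extrapolation: fit a low-degree (probabilists') Hermite series to a few cached feature snapshots and evaluate it at the next timestep. The function name, the `scale` parameter (a stand-in for the paper's dual-scaling mechanism, whose exact form is not given in the abstract), and all shapes are assumptions, not the authors' implementation.

```python
import numpy as np
from numpy.polynomial import hermite_e as He


def hermite_extrapolate(cached_features, t_past, t_next, degree=2, scale=1.0):
    """Predict a feature vector at t_next from cached snapshots.

    cached_features: array of shape (k, d) - k cached snapshots of a
                     d-dimensional feature.
    t_past:          array of shape (k,)   - timesteps of the snapshots.
    t_next:          scalar timestep to extrapolate to.
    scale:           rescales timesteps before fitting (a hypothetical
                     stand-in for the paper's dual-scaling idea).
    """
    t = np.asarray(t_past, dtype=float) * scale
    feats = np.asarray(cached_features, dtype=float)
    # Least-squares fit of a Hermite series per feature dimension;
    # coeffs has shape (degree + 1, d).
    coeffs = He.hermefit(t, feats, deg=degree)
    # Evaluate the fitted series at the (rescaled) target timestep.
    return He.hermeval(t_next * scale, coeffs)


# Illustrative usage: features that evolve quadratically in t are
# recovered exactly by a degree-2 fit.
t_past = np.array([0.0, 1.0, 2.0, 3.0])
feats = np.stack([np.array([t**2, 2.0 * t]) for t in t_past])  # shape (4, 2)
pred = hermite_extrapolate(feats, t_past, t_next=4.0, degree=2)
print(pred)  # close to [16.0, 8.0]
```

In practice a caching scheme would refresh the snapshot buffer whenever the model is actually run, and extrapolate between those full evaluations; this sketch shows only the prediction step.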

Page Count
17 pages

Category
Computer Science:
CV and Pattern Recognition