MeanFlow-Accelerated Multimodal Video-to-Audio Synthesis via One-Step Generation
By: Xiaoran Yang, Jianxuan Yang, Xinyue Guo, and more
Potential Business Impact:
Gives silent videos sound in one step.
A key challenge in synthesizing audio from silent videos is the inherent trade-off between synthesis quality and inference efficiency in existing methods. For instance, flow-matching-based models rely on modeling instantaneous velocity and therefore inherently require an iterative sampling process, leading to slow inference. To address this efficiency bottleneck, we introduce a MeanFlow-accelerated model that characterizes flow fields using average velocity, enabling one-step generation and thereby significantly accelerating multimodal video-to-audio (VTA) synthesis while preserving audio quality, semantic alignment, and temporal synchronization. Furthermore, a scalar rescaling mechanism is employed to balance conditional and unconditional predictions when classifier-free guidance (CFG) is applied, effectively mitigating CFG-induced distortions in one-step generation. Since the audio synthesis network is jointly trained with multimodal conditions, we further evaluate it on the text-to-audio (TTA) synthesis task. Experimental results demonstrate that incorporating MeanFlow into the network significantly improves inference speed without compromising perceptual quality on both VTA and TTA synthesis tasks.
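To make the mechanics concrete, the following is a minimal PyTorch-style sketch of one-step MeanFlow sampling with CFG scalar rescaling. The model interface (a network predicting the average velocity u(z_t, r, t)), the guidance scale w, and the blending factor alpha are illustrative assumptions based on the abstract, not the authors' released code, and the exact rescaling rule in the paper may differ.

import torch

def one_step_sample(model, cond, shape, device="cpu"):
    # Flow matching integrates the instantaneous velocity v over many steps.
    # MeanFlow instead predicts the average velocity
    #   u(z_t, r, t) = 1/(t - r) * integral_r^t v(z_tau, tau) dtau,
    # so a single network evaluation carries the sample across the interval:
    #   z_r = z_t - (t - r) * u(z_t, r, t).
    z1 = torch.randn(shape, device=device)      # Gaussian noise at t = 1
    t = torch.ones(shape[0], device=device)     # interval start (noise side)
    r = torch.zeros(shape[0], device=device)    # interval end (data side)
    u = model(z1, r, t, cond)                   # predicted average velocity
    span = (t - r).view(-1, *([1] * (z1.dim() - 1)))
    return z1 - span * u                        # one-step jump to z_0

def rescaled_cfg(model, z, r, t, cond, null_cond, w=3.0, alpha=0.75):
    # Plain CFG, (1 + w) * u_cond - w * u_uncond, can overshoot in one-step
    # generation because no later sampling iterations correct it. Rescaling
    # the guided prediction toward the conditional prediction's norm (an
    # assumed form of the paper's scalar rescaling) mitigates the distortion.
    u_cond = model(z, r, t, cond)
    u_uncond = model(z, r, t, null_cond)
    u = (1 + w) * u_cond - w * u_uncond
    scale = u_cond.flatten(1).norm(dim=1) / u.flatten(1).norm(dim=1).clamp_min(1e-8)
    scale = alpha * scale + (1 - alpha)         # blend toward no rescaling
    return u * scale.view(-1, *([1] * (u.dim() - 1)))

In VTA use, cond would bundle the video and text features; since the network is jointly trained with multimodal conditions, the same one-step sampler serves the TTA task with a text-only condition.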
Similar Papers
MeanAudio: Fast and Faithful Text-to-Audio Generation with Mean Flows
Sound
Makes computers create sound from words super fast.
IntMeanFlow: Few-step Speech Generation with Integral Velocity Distillation
Sound
Makes computer voices sound real, faster.
MeanFlowSE: one-step generative speech enhancement via conditional mean flow
Sound
Makes noisy voices clear in one step.