PAVAS: Physics-Aware Video-to-Audio Synthesis
By: Oh Hyun-Bin, Yuhta Takida, Toshimitsu Uesaka, and more
Recent advances in Video-to-Audio (V2A) generation have achieved impressive perceptual quality and temporal synchronization, yet most models remain appearance-driven, capturing visual-acoustic correlations without considering the physical factors that shape real-world sounds. We present Physics-Aware Video-to-Audio Synthesis (PAVAS), a method that incorporates physical reasoning into latent diffusion-based V2A generation through the Physics-Driven Audio Adapter (Phy-Adapter). The adapter receives object-level physical parameters estimated by the Physical Parameter Estimator (PPE), which uses a Vision-Language Model (VLM) to infer the moving object's mass and a segmentation-based dynamic 3D reconstruction module to recover its motion trajectory for velocity computation. These physical cues enable the model to synthesize sounds that reflect the underlying physical factors. To assess physical realism, we curate VGG-Impact, a benchmark focusing on object-object interactions, and introduce the Audio-Physics Correlation Coefficient (APCC), an evaluation metric that measures consistency between physical and auditory attributes. Comprehensive experiments show that PAVAS produces physically plausible and perceptually coherent audio, outperforming existing V2A models in both quantitative and qualitative evaluations. Visit https://physics-aware-video-to-audio-synthesis.github.io for demo videos.
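The abstract describes two concrete computations: recovering an object's velocity from its reconstructed 3D trajectory, and scoring consistency between physical and auditory attributes via APCC. The sketch below is only an illustrative reading of those two steps, not the paper's implementation: it assumes a finite-difference speed estimate over the trajectory, an impact kinetic-energy proxy (0.5·m·v²) for the physical attribute, and a Pearson-style correlation for APCC. All function names and proxies here are hypothetical.

```python
import numpy as np

def velocity_from_trajectory(positions, timestamps):
    """Finite-difference speed estimate from a recovered 3D trajectory.

    positions:  (T, 3) array of object centroids, an illustrative stand-in
                for the dynamic 3D reconstruction output used by the PPE.
    timestamps: (T,) array of frame times in seconds.
    Returns per-step speed magnitudes, shape (T-1,).
    """
    disp = np.diff(positions, axis=0)          # (T-1, 3) displacements
    dt = np.diff(timestamps)[:, None]          # (T-1, 1) time steps
    return np.linalg.norm(disp / dt, axis=1)   # speed per step


def apcc_pearson(physical_attr, auditory_attr):
    """Hypothetical APCC-style score: Pearson correlation between a physical
    attribute (e.g., impact kinetic energy) and an auditory attribute
    (e.g., RMS loudness of the impact segment) over N evaluated clips.
    The actual APCC definition is not given in the abstract."""
    x = np.asarray(physical_attr, dtype=float)
    y = np.asarray(auditory_attr, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-8)      # z-score (population std)
    y = (y - y.mean()) / (y.std() + 1e-8)
    return float(np.mean(x * y))               # mean of products = Pearson r


if __name__ == "__main__":
    # Toy usage: heavier/faster impacts should correlate with louder audio.
    rng = np.random.default_rng(0)
    traj = np.cumsum(rng.normal(size=(30, 3)) * 0.05, axis=0)  # fake trajectory
    times = np.arange(30) / 25.0                               # 25 fps
    speeds = velocity_from_trajectory(traj, times)
    mass = 1.2                                                 # kg, VLM-style mass estimate
    energy = 0.5 * mass * speeds.max() ** 2                    # impact-energy proxy
    print(f"peak speed {speeds.max():.3f} m/s, impact energy {energy:.3f} J")
```

Under this reading, a high APCC would indicate that generated sounds grow louder as the estimated impact energy grows, which is the kind of physics-audio consistency the metric is meant to capture.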