AVATAR: Reinforcement Learning to See, Hear, and Reason Over Video
By: Yogesh Kulkarni, Pooyan Fazli
Potential Business Impact:
Helps robots understand videos by watching and listening.
Multimodal reasoning over long-horizon video is challenging due to the need for precise spatiotemporal fusion and alignment across modalities. While recent methods such as Group Relative Policy Optimization (GRPO) have shown promise in this domain, they suffer from three key limitations: (1) data inefficiency from their on-policy design, (2) a vanishing advantage problem, where identical or near-identical rewards within a group eliminate the learning signal by producing zero-valued advantages, and (3) uniform credit assignment that fails to emphasize critical reasoning steps. We introduce AVATAR (Audio-Video Agent for Alignment and Reasoning), a framework that addresses these limitations through two core components: (1) an off-policy training architecture that improves sample efficiency and resolves vanishing advantages by reusing past experiences with greater reward diversity, and (2) Temporal Advantage Shaping (TAS), a novel credit assignment strategy that upweights key reasoning phases during learning. AVATAR achieves strong performance across various benchmarks, outperforming the Qwen2.5-Omni baseline by +5.4 on MMVU, +4.9 on OmniBench, and +4.5 on Video-Holmes, while demonstrating over 35% higher sample efficiency.
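To make the two mechanisms concrete, here is a minimal Python sketch of the vanishing-advantage problem in group-relative advantage normalization, together with a hypothetical TAS-style weighting over reasoning steps. The function names, the `boost` parameter, and the exact weighting scheme are illustrative assumptions under the abstract's description, not the paper's actual implementation.

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages in the GRPO style: each sampled response's
    reward is normalized against its group's mean and standard deviation."""
    rewards = np.asarray(rewards, dtype=float)
    std = rewards.std()
    if std < 1e-8:
        # Vanishing advantage: identical rewards across the group produce
        # all-zero advantages, so no learning signal survives this batch.
        return np.zeros_like(rewards)
    return (rewards - rewards.mean()) / std

print(grpo_advantages([1.0, 1.0, 1.0, 1.0]))  # -> [0. 0. 0. 0.]  (no signal)
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))  # -> [ 1. -1.  1. -1.]

def temporal_advantage_shaping(advantage, num_steps, key_phase, boost=2.0):
    """Hypothetical TAS-style shaping: spread a scalar advantage over the
    reasoning steps while upweighting a critical phase (start, end).
    AVATAR's actual weighting scheme may differ."""
    weights = np.ones(num_steps)
    start, end = key_phase
    weights[start:end] *= boost      # emphasize the key reasoning phase
    weights /= weights.mean()        # keep total credit unchanged on average
    return advantage * weights
```

The first function shows why a uniform reward group stalls on-policy GRPO, which is the failure mode the off-policy replay component targets; the second illustrates non-uniform credit assignment over a trajectory, the idea behind TAS.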
Similar Papers
AVATAAR: Agentic Video Answering via Temporal Adaptive Alignment and Reasoning
CV and Pattern Recognition
Helps computers understand long videos better.
Video-R2: Reinforcing Consistent and Grounded Reasoning in Multimodal Language Models
CV and Pattern Recognition
Helps computers understand videos by watching carefully.
AURORA: Augmented Understanding via Structured Reasoning and Reinforcement Learning for Reference Audio-Visual Segmentation
CV and Pattern Recognition
Finds sounds by seeing, hearing, and reading.