Multi-Modal Soccer Scene Analysis with Masked Pre-Training
By: Marc Peral, Guillem Capellera, Luis Ferraz, and more
In this work we propose a multi-modal architecture for analyzing soccer scenes from tactical camera footage, focusing on three core tasks: ball trajectory inference, ball state classification, and ball possessor identification. To this end, our solution integrates three distinct input modalities (player trajectories, player types, and image crops of individual players) into a unified framework that processes spatial and temporal dynamics with a cascade of sociotemporal transformer blocks. Unlike prior methods, which rely heavily on accurate ball tracking or handcrafted heuristics, our approach infers the ball trajectory without direct access to its past or future positions, and robustly identifies the ball state and ball possessor under the noisy or occluded conditions found in real top-league matches. We also introduce CropDrop, a modality-specific masking strategy that prevents over-reliance on image features and encourages the model to exploit cross-modal patterns during pre-training. We demonstrate the effectiveness of our approach on a large-scale dataset, achieving substantial improvements over state-of-the-art baselines on all three tasks. Our results highlight the benefits of combining structured and visual cues in a transformer-based architecture, and the importance of realistic masking strategies in multi-modal learning.
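To make the CropDrop idea concrete, here is a minimal, hypothetical sketch of modality-specific masking inside a three-modality fusion module. All names (CropDropFusion, crop_drop_prob, the feature dimensions, additive fusion) are illustrative assumptions, not the authors' implementation; the abstract only specifies that image-crop features are masked during pre-training so the model cannot over-rely on them.

```python
# Hypothetical sketch of CropDrop-style modality masking during pre-training.
# Dimensions, module names and the additive fusion are assumptions for illustration.
import torch
import torch.nn as nn


class CropDropFusion(nn.Module):
    """Fuses player trajectories, player types and image-crop features,
    randomly masking the crop modality during pre-training (CropDrop-like)."""

    def __init__(self, traj_dim=4, num_types=5, crop_dim=512,
                 d_model=128, crop_drop_prob=0.5):
        super().__init__()
        self.traj_proj = nn.Linear(traj_dim, d_model)         # e.g. (x, y, vx, vy)
        self.type_emb = nn.Embedding(num_types, d_model)      # player / goalkeeper / referee ...
        self.crop_proj = nn.Linear(crop_dim, d_model)         # pre-extracted crop features
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # stands in for dropped crops
        self.crop_drop_prob = crop_drop_prob

    def forward(self, traj, ptype, crops, pretraining=True):
        # traj:  (B, T, N, traj_dim), ptype: (B, N), crops: (B, T, N, crop_dim)
        B, T, N, _ = traj.shape
        h_traj = self.traj_proj(traj)
        h_type = self.type_emb(ptype)[:, None].expand(B, T, N, -1)
        h_crop = self.crop_proj(crops)

        if pretraining and self.training:
            # Drop the crop embedding for a random subset of (player, time) tokens,
            # forcing the model to fall back on trajectory and player-type cues.
            drop = torch.rand(B, T, N, 1, device=traj.device) < self.crop_drop_prob
            h_crop = torch.where(drop, self.mask_token.expand_as(h_crop), h_crop)

        # Simple additive fusion; in the paper the fused tokens would feed a
        # cascade of sociotemporal transformer blocks (not sketched here).
        return h_traj + h_type + h_crop


if __name__ == "__main__":
    model = CropDropFusion()
    tokens = model(torch.randn(2, 50, 22, 4),          # 22 players, 50 frames
                   torch.randint(0, 5, (2, 22)),
                   torch.randn(2, 50, 22, 512))
    print(tokens.shape)  # torch.Size([2, 50, 22, 128])
```

Using a learned mask token rather than zeroing the crop features is one common design choice for this kind of masking; whether the paper does the same is an assumption here.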
Similar Papers
SoccerMaster: A Vision Foundation Model for Soccer Understanding
CV and Pattern Recognition
Helps computers understand soccer games better.
SoccerTrack v2: A Full-Pitch Multi-View Soccer Dataset for Game State Reconstruction
CV and Pattern Recognition
Helps computers understand soccer games better.
CourtMotion: Learning Event-Driven Motion Representations from Skeletal Data for Basketball
CV and Pattern Recognition
Predicts basketball plays by watching player movements.