SkillFormer: Unified Multi-View Video Understanding for Proficiency Estimation
By: Edoardo Bianchi, Antonio Liotta
Potential Business Impact:
Helps computers judge how skilled someone is at a task.
Assessing human skill levels in complex activities is a challenging problem with applications in sports, rehabilitation, and training. In this work, we present SkillFormer, a parameter-efficient architecture for unified multi-view proficiency estimation from egocentric and exocentric videos. Building on the TimeSformer backbone, SkillFormer introduces a CrossViewFusion module that fuses view-specific features using multi-head cross-attention, learnable gating, and adaptive self-calibration. We leverage Low-Rank Adaptation (LoRA) to fine-tune only a small subset of parameters, significantly reducing training costs. Evaluated on the EgoExo4D dataset, SkillFormer achieves state-of-the-art accuracy in multi-view settings while demonstrating remarkable computational efficiency, using 4.5x fewer parameters and requiring 3.75x fewer training epochs than prior baselines. It excels across multiple structured tasks, confirming the value of multi-view integration for fine-grained skill assessment.
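To make the fusion design concrete, here is a minimal PyTorch sketch of a CrossViewFusion-style block, assuming one egocentric and one exocentric token stream of equal dimension. The abstract only names the ingredients (multi-head cross-attention, learnable gating, adaptive self-calibration), so the layer order, gating formula, and calibration step below are illustrative, not SkillFormer's actual implementation.

```python
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    """Illustrative cross-view fusion: multi-head cross-attention,
    a learnable sigmoid gate, and a simple per-channel self-calibration."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(dim, dim)            # learnable gating
        self.norm = nn.LayerNorm(dim)
        # "adaptive self-calibration" is assumed here to be a learnable
        # per-channel scale and shift applied after normalization
        self.scale = nn.Parameter(torch.ones(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, ego: torch.Tensor, exo: torch.Tensor) -> torch.Tensor:
        # ego, exo: (batch, tokens, dim) view-specific features
        fused, _ = self.attn(query=ego, key=exo, value=exo)  # attend ego -> exo
        g = torch.sigmoid(self.gate(fused))                  # gate in (0, 1)
        out = g * fused + (1.0 - g) * ego                    # gated residual mix
        return self.norm(out) * self.scale + self.shift      # self-calibration


fusion = CrossViewFusion(dim=768)
ego = torch.randn(2, 196, 768)   # egocentric tokens
exo = torch.randn(2, 196, 768)   # exocentric tokens
out = fusion(ego, exo)           # -> (2, 196, 768)
```

The Low-Rank Adaptation step can likewise be sketched as a generic LoRA layer that freezes a pretrained linear weight and learns only a low-rank update; the rank, scaling factor, and which TimeSformer modules are adapted are assumptions here, not the paper's reported configuration.

```python
class LoRALinear(nn.Module):
    """Generic LoRA wrapper: y = W x + (alpha / r) * B A x, with W frozen."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)    # freeze pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # trains only rank*(in_features + out_features) parameters
        # instead of the full in_features*out_features weight matrix
        return self.base(x) + self.scaling * (x @ self.A.t() @ self.B.t())
```

Because B is zero-initialized, the wrapped layer starts out identical to the frozen backbone, the standard LoRA choice for stable fine-tuning.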
Similar Papers
CuriosAI Submission to the EgoExo4D Proficiency Estimation Challenge 2025
CV and Pattern Recognition
Helps robots learn skills by watching people.
EgoExoBench: A Benchmark for First- and Third-person View Video Understanding in MLLMs
CV and Pattern Recognition
Teaches computers to understand different viewpoints.
MVAFormer: RGB-based Multi-View Spatio-Temporal Action Recognition with Transformer
CV and Pattern Recognition
Helps computers see actions from many angles.