Score: 1

SkillFormer: Unified Multi-View Video Understanding for Proficiency Estimation

Published: May 13, 2025 | arXiv ID: 2505.08665v4

By: Edoardo Bianchi, Antonio Liotta

Potential Business Impact:

Enables automated assessment of how skillfully a person performs a task from video, with applications in sports, rehabilitation, and training.

Business Areas:
Image Recognition, Data and Analytics, Software

Assessing human skill levels in complex activities is a challenging problem with applications in sports, rehabilitation, and training. In this work, we present SkillFormer, a parameter-efficient architecture for unified multi-view proficiency estimation from egocentric and exocentric videos. Building on the TimeSformer backbone, SkillFormer introduces a CrossViewFusion module that fuses view-specific features using multi-head cross-attention, learnable gating, and adaptive self-calibration. We leverage Low-Rank Adaptation to fine-tune only a small subset of parameters, significantly reducing training costs. Evaluated on the EgoExo4D dataset, SkillFormer achieves state-of-the-art accuracy in multi-view settings while demonstrating remarkable computational efficiency, using 4.5x fewer parameters and requiring 3.75x fewer training epochs than prior baselines. It excels across multiple structured tasks, confirming the value of multi-view integration for fine-grained skill assessment.
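
The abstract describes a fusion module built from multi-head cross-attention, a learnable gate, and self-calibration. The sketch below is a minimal, hypothetical PyTorch interpretation of such a module; the class name `CrossViewFusion` follows the paper, but all layer choices, dimensions, and the exact calibration step are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    """Illustrative fusion of per-view features with multi-head cross-attention,
    a learnable gate, and a simple self-calibration step (sizes are assumed)."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Learnable gate decides how much cross-view context each token keeps.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        # Adaptive self-calibration: learned per-channel scale and shift.
        self.calib_scale = nn.Parameter(torch.ones(dim))
        self.calib_shift = nn.Parameter(torch.zeros(dim))
        self.norm = nn.LayerNorm(dim)

    def forward(self, ego: torch.Tensor, exo: torch.Tensor) -> torch.Tensor:
        # ego, exo: (batch, tokens, dim) view-specific backbone features.
        # Egocentric tokens attend to the exocentric view.
        ctx, _ = self.attn(query=ego, key=exo, value=exo)
        # Gate mixes original and attended features token by token.
        g = self.gate(torch.cat([ego, ctx], dim=-1))
        fused = g * ctx + (1.0 - g) * ego
        # Calibrate channels before the final normalization.
        return self.norm(fused * self.calib_scale + self.calib_shift)

# Usage: fuse two views and pool the result for a proficiency classifier head.
fusion = CrossViewFusion(dim=768, num_heads=8)
ego_feats = torch.randn(2, 196, 768)   # egocentric view tokens
exo_feats = torch.randn(2, 196, 768)   # exocentric view tokens
pooled = fusion(ego_feats, exo_feats).mean(dim=1)  # (2, 768)
```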
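
The abstract also notes that fine-tuning uses Low-Rank Adaptation (LoRA) to update only a small subset of parameters. As a reminder of what that means in practice, here is a generic LoRA wrapper around a frozen linear layer; the rank and scaling values are assumptions, and this is not the paper's specific configuration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update (W + BA).
    Rank and alpha are illustrative defaults, not values from the paper."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # start as a zero (identity) update
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
```

Only the low-rank matrices are trained, which is what keeps the number of updated parameters, and thus training cost, small relative to full fine-tuning.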

Country of Origin
🇮🇹 Italy

Page Count
8 pages

Category
Computer Science:
Computer Vision and Pattern Recognition