PianoVAM: A Multimodal Piano Performance Dataset

Published: September 10, 2025 | arXiv ID: 2509.08800v1

By: Yonghyun Kim, Junhyung Park, Joonhyung Bae, and more

Potential Business Impact:

Helps computers transcribe piano performances by combining what they hear with what they see.

Business Areas:
Motion Capture, Media and Entertainment, Video

The multimodal nature of music performance has driven increasing interest in data beyond the audio domain within the music information retrieval (MIR) community. This paper introduces PianoVAM, a comprehensive piano performance dataset that includes videos, audio, MIDI, hand landmarks, fingering labels, and rich metadata. The dataset was recorded using a Disklavier piano, capturing audio and MIDI from amateur pianists during their daily practice sessions, alongside synchronized top-view videos in realistic and varied performance conditions. Hand landmarks and fingering labels were extracted using a pretrained hand pose estimation model and a semi-automated fingering annotation algorithm. We discuss the challenges encountered during data collection and the alignment process across different modalities. Additionally, we describe our fingering annotation method based on hand landmarks extracted from videos. Finally, we present benchmarking results for both audio-only and audio-visual piano transcription using the PianoVAM dataset and discuss additional potential applications.
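The abstract does not spell out the landmark-extraction pipeline, but a per-frame pass with an off-the-shelf hand pose estimator is the standard approach for top-view piano footage like this. The sketch below is a minimal illustration using MediaPipe Hands as a stand-in for the unnamed pretrained model; the function name and parameter values are assumptions, not the authors' implementation.

import cv2
import mediapipe as mp

def extract_hand_landmarks(video_path):
    """Yield (frame_index, handedness label, 21 normalized (x, y, z) points)."""
    cap = cv2.VideoCapture(video_path)
    # MediaPipe Hands is an assumed stand-in for the paper's pose estimator.
    with mp.solutions.hands.Hands(static_image_mode=False,
                                  max_num_hands=2,
                                  min_detection_confidence=0.5) as hands:
        frame_idx = 0
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_hand_landmarks:
                for hand, meta in zip(result.multi_hand_landmarks,
                                      result.multi_handedness):
                    label = meta.classification[0].label  # "Left" or "Right"
                    points = [(lm.x, lm.y, lm.z) for lm in hand.landmark]
                    yield frame_idx, label, points
    cap.release()

Associating each detected fingertip with the keyboard geometry and the synchronized MIDI onsets is what would then let fingering labels be assigned semi-automatically, along the lines the abstract describes.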

Country of Origin
Korea, Republic of; United States

Page Count
8 pages

Category
Computer Science: Sound