Score: 1

Rhythm in the Air: Vision-based Real-Time Music Generation through Gestures

Published: November 2, 2025 | arXiv ID: 2511.00793v1

By: Barathi Subramanian, Rathinaraja Jeyaraj, Anand Paul, and more

BigTech Affiliations: Stanford University

Potential Business Impact:

Lets you make music by waving your hands.

Business Areas:
Image Recognition, Data and Analytics, Software

Gesture recognition is an essential component of human-computer interaction (HCI), enabling seamless interaction between users and computer systems without physical touch. This paper introduces an innovative application of vision-based dynamic gesture recognition (VDGR) for real-time music composition through gestures. To implement this application, we generate a custom gesture dataset of over 15,000 samples across 21 classes, covering 7 musical notes, each at three distinct pitch levels. To cope effectively with the modest volume of training data and to accurately discern and prioritize complex gesture sequences for music creation, we develop a multi-layer attention-based gated recurrent unit (MLA-GRU) model, in which a gated recurrent unit (GRU) learns temporal patterns from the observed sequence and an attention layer focuses on musically pertinent gesture segments. Our empirical studies demonstrate that MLA-GRU significantly surpasses the classical GRU model, achieving an accuracy of 96.83% compared to the baseline's 86.7%. Moreover, our approach exhibits superior efficiency and processing speed, which are crucial for interactive applications. We believe our proposed system lets people interact with music in a new and exciting way. It not only advances HCI experiences but also highlights MLA-GRU's effectiveness in scenarios demanding swift and precise gesture recognition.
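The architecture the abstract describes, a GRU over a gesture sequence with an attention layer pooling the timesteps before classification, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hidden size, layer count, and input dimension (assumed here to be 63, e.g. 21 hand landmarks x 3 coordinates per frame) are hypothetical choices; only the 21 output classes come from the paper.

```python
import torch
import torch.nn as nn

class MLAGRU(nn.Module):
    """Sketch of a multi-layer attention-based GRU gesture classifier.

    All hyperparameters below are illustrative assumptions, not values
    from the paper (which only specifies 21 gesture classes).
    """
    def __init__(self, input_dim=63, hidden_dim=128, num_layers=2, num_classes=21):
        super().__init__()
        # Multi-layer GRU learns temporal patterns across the frame sequence.
        self.gru = nn.GRU(input_dim, hidden_dim,
                          num_layers=num_layers, batch_first=True)
        # Attention scores one scalar per timestep, so the model can
        # emphasize musically pertinent segments of the gesture.
        self.attn = nn.Linear(hidden_dim, 1)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, time, features) -- e.g. hand-landmark coordinates per frame
        out, _ = self.gru(x)                            # (batch, time, hidden)
        weights = torch.softmax(self.attn(out), dim=1)  # (batch, time, 1)
        context = (weights * out).sum(dim=1)            # attention-pooled summary
        return self.fc(context)                         # (batch, num_classes) logits

model = MLAGRU()
logits = model(torch.randn(4, 30, 63))  # 4 sequences of 30 frames each
print(logits.shape)                     # torch.Size([4, 21])
```

At inference time, `logits.argmax(dim=-1)` would yield one of the 21 gesture classes (7 notes x 3 pitch levels), which a downstream synthesizer could map to a note event.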

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Multimedia