Decoding Selective Auditory Attention to Musical Elements in Ecologically Valid Music Listening
By: Taketo Akama, Zhuohao Zhang, Tsukasa Nagashima, and more
Art has long played a profound role in shaping human emotion, cognition, and behavior. While visual arts such as painting and architecture have been studied through eye tracking, revealing distinct gaze patterns between experts and novices, analogous methods for auditory art forms remain underdeveloped. Music, despite being a pervasive component of modern life and culture, still lacks objective tools to quantify listeners' attention and perceptual focus during natural listening experiences. To our knowledge, this is the first attempt to decode selective attention to musical elements using naturalistic, studio-produced songs and a lightweight consumer-grade EEG device with only four electrodes. By analyzing neural responses during naturalistic music listening, we test whether decoding is feasible under conditions that minimize participant burden and preserve the authenticity of the musical experience. Our contributions are fourfold: (i) decoding attention to musical elements in real studio-produced songs, (ii) demonstrating feasibility with a four-channel consumer EEG device, (iii) providing practical insights for music attention decoding, and (iv) demonstrating improved decoding performance over prior work. Our findings suggest that musical attention can be decoded not only for novel songs but also across new subjects, with performance improvements over existing approaches under our tested conditions. These findings show that consumer-grade devices can reliably capture attention-related neural signals, and that neural decoding in music could be feasible in real-world settings. This paves the way for applications in education, personalized music technologies, and therapeutic interventions.
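Since the abstract does not describe the paper's decoding model, the following is a minimal, hypothetical sketch of what a selective-attention decoder over four-channel consumer EEG might look like: log band-power features extracted per short window, fed to a logistic-regression classifier. The sampling rate, frequency bands, window length, and binary labels below are illustrative assumptions, not the authors' method.

```python
# Hedged illustration only: the paper's actual model is not specified in the
# abstract. This sketch shows a generic attention decoder on four-channel EEG,
# using per-channel band-power features and logistic regression. All constants
# (sampling rate, bands, window length, labels) are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 256            # assumed sampling rate (Hz) for a consumer headset
N_CHANNELS = 4      # four electrodes, as in the paper's device class
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(window: np.ndarray) -> np.ndarray:
    """window: (N_CHANNELS, n_samples) -> flat vector of log band powers."""
    freqs, psd = welch(window, fs=FS, nperseg=min(window.shape[1], FS))
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))  # mean power per channel
    return np.log(np.concatenate(feats) + 1e-12)  # log-power is better behaved

# Synthetic stand-in data: 200 two-second windows with binary attention labels
# (e.g., attending vocals vs. accompaniment -- purely illustrative).
rng = np.random.default_rng(0)
X = np.stack([band_power_features(rng.standard_normal((N_CHANNELS, 2 * FS)))
              for _ in range(200)])
y = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

In practice, a decoder evaluated on novel songs and new subjects, as the abstract reports, would replace this toy pipeline with a model trained on real labeled EEG and validated with held-out songs and leave-one-subject-out splits.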