LUMIA: A Handheld Vision-to-Music System for Real-Time, Embodied Composition
By: Chung-Ta Huang, Connie Cheng, Vealy Lai
Most digital music tools emphasize precision and control, but often lack support for tactile, improvisational workflows grounded in environmental interaction. Lumia addresses this by enabling users to "compose through looking": transforming visual scenes into musical phrases using a handheld, camera-based interface and large multimodal models. A vision-language model (GPT-4V) analyzes captured imagery to generate structured prompts, which, combined with user-selected instrumentation, guide a text-to-music pipeline (Stable Audio). This real-time process allows users to frame, capture, and layer audio interactively, producing loopable musical segments through embodied interaction. The system supports a co-creative workflow where human intent and model inference shape the musical outcome. By embedding generative AI within a physical device, Lumia bridges perception and composition, introducing a new modality for creative exploration that merges vision, language, and sound. It repositions generative music not as a task of parameter tuning, but as an improvisational practice driven by contextual, sensory engagement.
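The abstract describes a two-stage pipeline: a vision-language model turns a captured frame and a chosen instrument into a structured prompt, which then drives a text-to-music generator. Below is a minimal Python sketch of that flow, under stated assumptions: it uses the OpenAI chat-completions API as a stand-in for the GPT-4V-class vision step, leaves the Stable Audio call as a stub because the abstract does not specify its interface, and the function names and the "frame.jpg"/"marimba" inputs are purely illustrative, not part of the Lumia system.

import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def frame_to_prompt(image_path: str, instrument: str) -> str:
    """Turn a captured frame plus a user-selected instrument into a
    structured text-to-music prompt (mood, tempo, texture)."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in for the GPT-4V-class model named above
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Describe this scene as a one-sentence music prompt "
                          f"(mood, tempo, texture), scored for {instrument}.")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


def prompt_to_loop(prompt: str, seconds: int = 8) -> bytes:
    """Text-to-music step (Stable Audio in the system). The abstract does not
    specify the service interface, so this stub only marks where the
    generated, loopable audio segment would be returned."""
    raise NotImplementedError("Wire in a text-to-music backend here.")


if __name__ == "__main__":
    # One capture-and-layer cycle: frame a scene, generate a prompt, request a loop.
    prompt = frame_to_prompt("frame.jpg", instrument="marimba")
    print(prompt)
    audio_loop = prompt_to_loop(prompt)  # raises until a backend is attached

In the device itself this cycle would repeat per capture, with each returned segment layered over previously generated loops; the sketch only shows a single pass.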
Similar Papers
Zero-Effort Image-to-Music Generation: An Interpretable RAG-based VLM Approach (Sound). Turns pictures into music with explanations.
Rhythm in the Air: Vision-based Real-Time Music Generation through Gestures (Multimedia). Lets you make music by waving your hands.
A Real-Time Gesture-Based Control Framework (Human-Computer Interaction). Lets dancers change music with their moves.