Investigating Modality Contribution in Audio LLMs for Music
By: Giovana Morais, Magdalena Fuentes
Potential Business Impact:
Helps AI understand music by listening, not just reading.
Audio Large Language Models (Audio LLMs) enable human-like conversation about music, yet it is unclear whether they are truly listening to the audio or, as recent benchmarks suggest, merely relying on textual reasoning. This paper investigates this issue by quantifying the contribution of each modality to a model's output. We adapt the MM-SHAP framework, a performance-agnostic score based on Shapley values that measures the relative contribution of each modality to a model's prediction. Evaluating two models on the MuChoMusic benchmark, we find that the model with higher accuracy relies more on text to answer questions. Further inspection, however, shows that even when the overall audio contribution is low, models can successfully localize key sound events, suggesting that the audio is not entirely ignored. Our study is the first application of MM-SHAP to Audio LLMs, and we hope it serves as a foundational step for future research in explainable AI and audio.
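The abstract does not spell out how an MM-SHAP-style score is computed, so the sketch below only illustrates the general idea under stated assumptions: approximate a Shapley value for every input token (audio patches and text tokens alike) by Monte Carlo sampling over token orderings, then report each modality's share of the total absolute Shapley mass. The `predict_fn`, the toy weights, and the audio/text token split are hypothetical stand-ins for illustration, not the authors' models or implementation.

```python
import numpy as np

def approx_shapley_values(predict_fn, n_tokens, n_samples=200, seed=None):
    """Monte Carlo (permutation-sampling) approximation of per-token Shapley values.

    predict_fn: maps a boolean mask of length n_tokens (True = token kept,
                False = token masked) to a scalar model score, e.g. the
                probability of the answer the model originally picked.
    Returns one estimated Shapley value per token.
    """
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_tokens)
    for _ in range(n_samples):
        order = rng.permutation(n_tokens)
        mask = np.zeros(n_tokens, dtype=bool)
        prev_score = predict_fn(mask)          # score with everything masked
        for j in order:
            mask[j] = True
            score = predict_fn(mask)
            phi[j] += score - prev_score       # marginal contribution of token j
            prev_score = score
    return phi / n_samples

def modality_shares(phi, modality_ids):
    """MM-SHAP-style score: each modality's share of the total |phi| mass."""
    modality_ids = np.asarray(modality_ids)
    total = np.abs(phi).sum() + 1e-12
    return {m: float(np.abs(phi[modality_ids == m]).sum() / total)
            for m in np.unique(modality_ids)}

# Toy example: 4 audio patches and 6 text tokens. The fake additive "model"
# is dominated by the text tokens, so the text share should come out high.
modalities = ["audio"] * 4 + ["text"] * 6
weights = np.array([0.1] * 4 + [0.9] * 6)
predict_fn = lambda mask: float(weights[mask].sum())

phi = approx_shapley_values(predict_fn, n_tokens=10, n_samples=300, seed=0)
print(modality_shares(phi, modalities))  # roughly {'audio': 0.07, 'text': 0.93}
```

In this toy setup the text share is about 0.93, mirroring the kind of text-heavy contribution pattern the abstract describes; plugging a real Audio LLM's answer probability in as `predict_fn` would yield per-modality contribution scores of the sort the paper studies.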
Similar Papers
Probing Audio-Generation Capabilities of Text-Based Language Models
Sound
Computers learn to make sounds from words.
Audio Large Language Models Can Be Descriptive Speech Quality Evaluators
Sound
Helps computers understand if speech sounds good.
Multifaceted Evaluation of Audio-Visual Capability for MLLMs: Effectiveness, Efficiency, Generalizability and Robustness
Multimedia
Tests how AI understands sound and pictures.