Imagine to Hear: Auditory Knowledge Generation can be an Effective Assistant for Language Models
By: Suho Yoo, Hyunjong Ok, Jaeho Lee
Potential Business Impact:
Computers learn to understand sounds from words alone.
Language models pretrained on text-only corpora often struggle with tasks that require auditory commonsense knowledge. Previous work addresses this problem by augmenting the language model to retrieve knowledge from external audio databases. This approach has several limitations, such as the potential lack of relevant audio in the databases and the high cost of constructing them. To address these issues, we propose Imagine to Hear, a novel approach that dynamically generates auditory knowledge using generative models. Our framework detects multiple audio-related textual spans in the given prompt and generates corresponding auditory knowledge. We develop several mechanisms to efficiently process multiple pieces of auditory knowledge, including a CLAP-based rejection sampler and a language-audio fusion module. Our experiments show that our method achieves state-of-the-art performance on AuditoryBench without relying on external databases, highlighting the effectiveness of our generation-based approach.
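The rejection-sampling step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generate_audio`, `clap_similarity`, the threshold, and the attempt budget are all hypothetical stand-ins (a real system would call a text-to-audio model and score candidates with CLAP text-audio similarity).

```python
def generate_audio(text_span: str, attempt: int) -> str:
    """Stub for a text-to-audio generative model.

    Returns a fake identifier standing in for a generated audio clip.
    """
    return f"audio[{text_span}]#{attempt}"


def clap_similarity(text_span: str, audio_id: str) -> float:
    """Stub for CLAP text-audio similarity, scored in [0, 1].

    A toy deterministic score; a real system would embed the text span
    and the audio clip with CLAP and take their cosine similarity.
    """
    return (sum(map(ord, audio_id)) % 100) / 100.0


def rejection_sample(text_span: str, threshold: float = 0.5,
                     max_attempts: int = 10) -> tuple[str, float]:
    """Generate audio candidates for a text span, accepting the first
    one whose similarity to the span clears the threshold."""
    best, best_score = None, -1.0
    for attempt in range(max_attempts):
        candidate = generate_audio(text_span, attempt)
        score = clap_similarity(text_span, candidate)
        if score >= threshold:
            return candidate, score  # accepted
        if score > best_score:       # track the runner-up
            best, best_score = candidate, score
    # Fall back to the best-scoring candidate if nothing passed.
    return best, best_score


audio, score = rejection_sample("a dog barking")
print(audio, round(score, 3))
```

In this sketch, each detected span is sampled independently, so multiple audio-related spans in one prompt each yield their own accepted (or best-effort) audio candidate before fusion.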
Similar Papers
AuditoryBench++: Can Language Models Understand Auditory Knowledge without Hearing?
Computation and Language
Teaches computers to "hear" and understand sounds.
Probing Audio-Generation Capabilities of Text-Based Language Models
Sound
Computers learn to make sounds from words.
Audio-Thinker: Guiding Audio Language Model When and How to Think via Reinforcement Learning
Sound
Helps computers understand spoken questions better.