Probing Audio-Generation Capabilities of Text-Based Language Models
By: Arjun Prasaath Anbazhagan, Parteek Kumar, Ujjwal Kaur, and more
Potential Business Impact:
Computers learn to make sounds from words.
How does the textual representation of audio relate to what Large Language Models (LLMs) learn about the audio world? This research investigates the extent to which LLMs can be prompted to generate audio, despite being trained primarily on textual data. We employ a three-tier approach of progressively increasing audio complexity: 1) Musical Notes, 2) Environmental Sounds, and 3) Human Speech. To bridge the gap between text and audio, we use code as an intermediary, prompting LLMs to generate code that, when executed, produces the desired audio output. To evaluate the quality and accuracy of the generated audio, we use Fréchet Audio Distance (FAD) and CLAP (Contrastive Language-Audio Pretraining) scores. Our findings reveal that while LLMs can generate basic audio features, their performance deteriorates as the complexity of the audio increases. This suggests that although LLMs possess a latent understanding of the auditory world, their ability to translate this understanding into tangible audio output remains rudimentary. Further research into techniques that enhance the quality and diversity of LLM-generated audio could substantially improve the audio-generation performance of text-based LLMs.
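The code-as-intermediary idea can be illustrated with the simplest tier, Musical Notes. Below is a minimal, hypothetical sketch of the kind of program an LLM might be prompted to emit: it synthesizes a 440 Hz A4 sine tone and writes it to a WAV file. The note choice, duration, and file name are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical example of LLM-emitted code for the "Musical Notes" tier:
# synthesize a one-second A4 (440 Hz) sine tone and save it as a WAV file.
import wave

import numpy as np

SAMPLE_RATE = 44100  # samples per second
DURATION = 1.0       # seconds
FREQUENCY = 440.0    # Hz; A4, an illustrative choice

# Generate the sine wave at half amplitude to avoid clipping.
t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
samples = 0.5 * np.sin(2.0 * np.pi * FREQUENCY * t)

# Convert to 16-bit PCM and write a mono WAV file.
pcm = (samples * 32767).astype(np.int16)
with wave.open("a4_note.wav", "wb") as wav_file:
    wav_file.setnchannels(1)          # mono
    wav_file.setsampwidth(2)          # 16-bit samples
    wav_file.setframerate(SAMPLE_RATE)
    wav_file.writeframes(pcm.tobytes())
```

Executing LLM-emitted scripts like this produces audio files that can then be scored against reference audio (FAD) or against the text prompt itself (CLAP), which is how the evaluation pipeline described in the abstract would plausibly consume them.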
Similar Papers
Investigating Modality Contribution in Audio LLMs for Music
Machine Learning (CS)
Helps AI understand music by listening, not just reading.
Audio-Language Models for Audio-Centric Tasks: A survey
Sound
Computers understand sounds like humans do.
Exploring Fine-Tuning of Large Audio Language Models for Spoken Language Understanding under Limited Speech Data
Sound
Teaches computers to understand speech better with less data.