Seeing isn't Hearing: Benchmarking Vision Language Models at Interpreting Spectrograms
By: Tyler Loakman, Joseph James, Chenghua Lin
Potential Business Impact:
Computers can't yet "hear" sounds from pictures.
With the rise of Large Language Models (LLMs) and their vision-enabled counterparts (VLMs), numerous works have investigated their capabilities in tasks that fuse the modalities of vision and language. In this work, we benchmark the extent to which VLMs can act as highly trained phoneticians, interpreting spectrograms and waveforms of speech. To do this, we synthesise a novel dataset containing 4k+ English words spoken in isolation, alongside stylistically consistent spectrogram and waveform figures. We test the ability of VLMs to understand these representations of speech through a multiple-choice task in which models must predict the correct phonemic or graphemic transcription of a spoken word when it is presented amongst three distractor transcriptions selected by their phonemic edit distance to the ground truth. We observe that both zero-shot and fine-tuned models rarely perform above chance, demonstrating that interpreting such figures requires specific parametric knowledge, which paired samples alone do not provide.
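To make the benchmark construction concrete, the sketch below illustrates one way distractors could be chosen by phonemic edit distance, as the abstract describes. It is a minimal illustration, not the authors' released code: the toy lexicon, the ARPAbet-style space-separated phoneme strings, and the function names (edit_distance, pick_distractors) are assumptions introduced here for clarity.

```python
# Minimal sketch of distractor selection by phonemic edit distance.
# Assumptions (not from the paper): phonemic transcriptions are
# space-separated ARPAbet-style symbols, and `lexicon` maps each
# word to its transcription.

from typing import Dict, List


def edit_distance(a: List[str], b: List[str]) -> int:
    """Levenshtein distance over phoneme sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(a)][len(b)]


def pick_distractors(target: str, lexicon: Dict[str, str], k: int = 3) -> List[str]:
    """Return the k lexicon entries phonemically closest to the target word."""
    target_phones = lexicon[target].split()
    candidates = [
        (edit_distance(target_phones, phones.split()), word)
        for word, phones in lexicon.items()
        if word != target
    ]
    candidates.sort()
    return [word for _, word in candidates[:k]]


# Hypothetical toy lexicon, for illustration only.
lexicon = {
    "cat": "K AE T",
    "cap": "K AE P",
    "cut": "K AH T",
    "bat": "B AE T",
    "dog": "D AO G",
}
print(pick_distractors("cat", lexicon))  # e.g. ['bat', 'cap', 'cut']
```

In a multiple-choice item built this way, the ground-truth transcription would be shuffled in among the three closest distractors, so a model must genuinely read the spectrogram or waveform rather than rule out obviously dissimilar options.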
Similar Papers
Hidden in plain sight: VLMs overlook their visual representations
CV and Pattern Recognition
Makes computers better at understanding pictures.
Knowledge-Augmented Vision Language Models for Underwater Bioacoustic Spectrogram Analysis
CV and Pattern Recognition
Lets computers understand whale songs without training.
A Survey on Efficient Vision-Language Models
CV and Pattern Recognition
Makes smart AI work on small, slow devices.