The Curious Case of Visual Grounding: Different Effects for Speech- and Text-based Language Encoders
By: Adrian Sauter, Willem Zuidema, Marianne de Heer Kloots
Potential Business Impact:
Shows how adding pictures during training changes what speech and text models learn about words.
How does visual information included during training affect language processing in audio- and text-based deep learning models? We explore how such visual grounding affects model-internal representations of words, and find substantially different effects in speech- vs. text-based language encoders. First, global representational comparisons reveal that visual grounding increases alignment between representations of spoken and written language, but this effect seems mainly driven by enhanced encoding of word identity rather than meaning. We then apply targeted clustering analyses to probe for phonetic vs. semantic discriminability in model representations. Speech-based representations remain phonetically dominated after visual grounding, and, in contrast to text-based representations, their semantic discriminability does not improve. Our findings could usefully inform the development of more efficient methods to enrich speech-based models with visually informed semantics.
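The two kinds of analysis mentioned in the abstract can be made concrete with a small sketch. The Python below is not the authors' code; all data, dimensions, and label sets are illustrative assumptions. It shows one common way to compare representations globally (linear Centered Kernel Alignment between paired speech and text embeddings) and one way to probe phonetic vs. semantic discriminability (silhouette scores over hypothetical phonetic and semantic groupings of the same words).

```python
# Minimal sketch of (1) a global representational comparison and (2) a
# clustering-style discriminability probe. Toy random data throughout;
# in a real analysis the matrices would hold word representations extracted
# from speech- and text-based encoders.
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# One row per word, from a speech encoder and a text encoder respectively.
n_words, d_speech, d_text = 200, 768, 512
speech_reps = rng.standard_normal((n_words, d_speech))
text_reps = rng.standard_normal((n_words, d_text))

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation matrices
    whose rows correspond to the same words in the same order."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

print("speech-text alignment (CKA):", linear_cka(speech_reps, text_reps))

# Discriminability probe: how well the representations separate according to
# phonetic vs. semantic groupings of the words. The labels here are random
# placeholders; in practice they might come from e.g. initial-phoneme classes
# and semantic categories.
phonetic_labels = rng.integers(0, 10, size=n_words)
semantic_labels = rng.integers(0, 10, size=n_words)

print("phonetic discriminability:", silhouette_score(speech_reps, phonetic_labels))
print("semantic discriminability:", silhouette_score(speech_reps, semantic_labels))
```

In a study like the one described, the representation matrices would be hidden states extracted from grounded vs. ungrounded model variants, and the alignment and discriminability scores would be compared across those variants rather than computed once on random data.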
Similar Papers
Does Visual Grounding Enhance the Understanding of Embodied Knowledge in Large Language Models?
Computation and Language
Computers still can't truly understand the world.
Leveraging Audio-Visual Data to Reduce the Multilingual Gap in Self-Supervised Speech Models
Computation and Language
Helps computers understand many languages better.
Towards Understanding Visual Grounding in Visual Language Models
Computer Vision and Pattern Recognition
Helps computers understand what's in pictures.