Breaking the Barriers of Text-Hungry and Audio-Deficient AI
By: Hamidou Tembine, Issa Bamia, Massa NDong, and more
Potential Business Impact:
Lets computers understand and speak languages directly from audio, even languages that are unwritten.
While global linguistic diversity spans more than 7,164 recognized languages, the dominant architecture of machine intelligence remains fundamentally biased toward written text. This bias excludes over 700 million people, particularly in rural and remote regions, who are audio-literate. In this work, we introduce a fully textless, audio-to-audio machine intelligence framework designed to serve this underserved population, and anyone who prefers the efficiency of audio. Our contributions include novel audio-to-audio translation architectures that bypass text entirely, including spectrogram-, scalogram-, wavelet-, and unit-based models. Central to our approach is the Multiscale Audio-Semantic Transform (MAST), a representation that encodes tonal, prosodic, speaker, and expressive features. We further integrate MAST into a fractional diffusion framework of mean-field type, powered by fractional Brownian motion, which enables the generation of high-fidelity, semantically consistent speech without reliance on textual supervision. The result is a robust and scalable system capable of learning directly from raw audio, even in languages that are unwritten or rarely digitized. This work represents a fundamental shift toward audio-native machine intelligence systems, expanding access to language technologies for communities historically left out of the machine intelligence ecosystem.
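The abstract names two mathematical ingredients without spelling them out: a multiscale audio representation and fractional Brownian motion as the driving noise. Below is a minimal, self-contained Python sketch of both ideas under stated assumptions. The complex-Morlet scalogram is an illustrative stand-in for a multiscale transform in the spirit of MAST (the paper's actual MAST construction is not given here), and the Cholesky-based sampler generates the fractional Brownian motion named in the abstract. The names morlet_scalogram and fbm are hypothetical, not from the paper.

```python
import numpy as np

def morlet_scalogram(x, fs, num_scales=32, w0=6.0):
    """Complex Morlet scalogram: a toy multiscale analysis of audio.

    Illustrative stand-in for a multiscale audio-semantic representation;
    the paper's actual MAST is not specified in the abstract.
    """
    n = len(x)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)        # signed frequencies, Hz
    X = np.fft.fft(x)
    # Log-spaced center frequencies covering roughly 50 Hz .. 0.4 * fs
    center_freqs = np.geomspace(50.0, fs / 2.5, num_scales)
    S = np.empty((num_scales, n))
    for i, fc in enumerate(center_freqs):
        # Morlet wavelet in the frequency domain, analytic (positive freqs only)
        psi_hat = np.where(freqs > 0,
                           np.exp(-0.5 * (w0 * freqs / fc - w0) ** 2), 0.0)
        S[i] = np.abs(np.fft.ifft(X * psi_hat))  # envelope at this scale
    return center_freqs, S

def fbm(n, hurst=0.7, seed=0):
    """Sample a fractional Brownian motion path by Cholesky factorization.

    fBm with Hurst index H has covariance
    Cov(B_s, B_t) = 0.5 * (s**(2H) + t**(2H) - |t - s|**(2H)).
    O(n^3): fine for a demo, not for long sequences.
    """
    t = np.arange(1, n + 1, dtype=float) / n
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst)
                 - np.abs(s - u) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal(n)

if __name__ == "__main__":
    fs = 16_000
    t = np.arange(fs) / fs
    # A synthetic "utterance": a 220 Hz tone with vibrato plus noise
    x = np.sin(2 * np.pi * 220 * t + 3 * np.sin(2 * np.pi * 5 * t))
    x += 0.05 * np.random.default_rng(1).standard_normal(fs)
    freqs, S = morlet_scalogram(x, fs)
    print("scalogram shape:", S.shape)
    path = fbm(256, hurst=0.7)
    print("fBm path, first 3 samples:", path[:3].round(3))
```

A production system would replace the O(n^3) Cholesky sampler with an FFT-based method such as Davies-Harte, and would learn the multiscale features end to end rather than fixing a Morlet wavelet; this sketch only makes the two abstract ingredients concrete.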
Similar Papers
AISTAT lab system for DCASE2025 Task6: Language-based audio retrieval
Sound
Finds sounds in audio using text descriptions.
MahaTTS: A Unified Framework for Multilingual Text-to-Speech Synthesis
Audio and Speech Processing
Speaks many Indian languages like a person.
A Cascaded Architecture for Extractive Summarization of Multimedia Content via Audio-to-Text Alignment
Information Retrieval
Summarizes long videos into short, easy text.