VOX-KRIKRI: Unifying Speech and Language through Continuous Fusion
By: Dimitrios Damianos, Leon Voukoutis, Georgios Paraskevopoulos, and more
Potential Business Impact:
Lets computers understand spoken Greek and respond in natural language.
We present a multimodal fusion framework that bridges pre-trained decoder-based large language models (LLMs) and acoustic encoder-decoder architectures such as Whisper, with the aim of building speech-enabled LLMs. Instead of directly using audio embeddings, we explore an intermediate audio-conditioned text space as a more effective mechanism for alignment. Our method operates fully in continuous text representation spaces, fusing Whisper's hidden decoder states with those of an LLM through cross-modal attention, and supports both offline and streaming modes. We introduce VoxKrikri, the first Greek speech LLM, and show through analysis that our approach effectively aligns representations across modalities. These results highlight continuous-space fusion as a promising path for multilingual and low-resource speech LLMs, while achieving state-of-the-art results for automatic speech recognition in Greek, providing an average ~20% relative improvement across benchmarks.
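To make the fusion mechanism concrete, here is a minimal PyTorch sketch of how Whisper's hidden decoder states could be fused with an LLM's hidden states via cross-modal attention, entirely in continuous representation space, as the abstract describes. The module name, the dimensions, and the projection-plus-attention layout are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of continuous-space cross-modal fusion (assumed layout, not the
# authors' exact architecture). Whisper decoder states are projected into
# the LLM's hidden space, then the LLM states attend over them.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Fuse Whisper decoder hidden states into an LLM's hidden states
    via cross-modal attention in continuous representation space."""

    def __init__(self, llm_dim: int = 4096, whisper_dim: int = 1280, n_heads: int = 8):
        super().__init__()
        # Project Whisper's audio-conditioned text states into the LLM's space.
        self.audio_proj = nn.Linear(whisper_dim, llm_dim)
        # LLM states act as queries; projected Whisper states as keys/values.
        self.cross_attn = nn.MultiheadAttention(llm_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(llm_dim)

    def forward(self, llm_states: torch.Tensor, whisper_states: torch.Tensor) -> torch.Tensor:
        # llm_states:     (batch, text_len,  llm_dim)     from the LLM decoder
        # whisper_states: (batch, audio_len, whisper_dim) from Whisper's decoder
        audio_ctx = self.audio_proj(whisper_states)
        fused, _ = self.cross_attn(query=llm_states, key=audio_ctx, value=audio_ctx)
        # Residual connection preserves the LLM's original text representation.
        return self.norm(llm_states + fused)

# Toy usage with made-up shapes:
fusion = CrossModalFusion()
llm_h = torch.randn(1, 32, 4096)       # LLM hidden states for 32 text tokens
whisper_h = torch.randn(1, 100, 1280)  # Whisper decoder states for one utterance
out = fusion(llm_h, whisper_h)         # -> (1, 32, 4096)
```

Because the fusion attends over a sequence of decoder states rather than raw audio embeddings, the same block could plausibly serve both offline use (full utterance available) and streaming use (attending over the Whisper states emitted so far).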
Similar Papers
Continual Speech Learning with Fused Speech Features
Computation and Language
Lets computers learn new speech tasks faster.
Continuous-Token Diffusion for Speaker-Referenced TTS in Multimodal LLMs
Audio and Speech Processing
Makes computers talk like real people.