Your voice is your voice: Supporting Self-expression through Speech Generation and LLMs in Augmented and Alternative Communication
By: Yiwen Xu, Monideep Chakraborti, Tianyi Zhang, and more
Potential Business Impact:
Helps people speak with more feeling and detail.
In this paper, we present Speak Ease: an augmentative and alternative communication (AAC) system to support users' expressivity by integrating multimodal input, including text, voice, and contextual cues (conversational partner and emotional tone), with large language models (LLMs). Speak Ease combines automatic speech recognition (ASR), context-aware LLM-based outputs, and personalized text-to-speech technologies to enable more personalized, natural-sounding, and expressive communication. Through an exploratory feasibility study and focus group evaluation with speech and language pathologists (SLPs), we assessed Speak Ease's potential to enable expressivity in AAC. The findings highlight the priorities and needs of AAC users and the system's ability to enhance user expressivity by supporting more personalized and contextually relevant communication. This work provides insights into the use of multimodal inputs and LLM-driven features to improve AAC systems and support expressivity.
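To make the described pipeline concrete, below is a minimal, illustrative sketch of how an ASR transcript plus contextual cues (conversational partner and emotional tone) could be composed into an LLM prompt and then passed to a personalized text-to-speech voice. This is not the paper's implementation: all function names (`transcribe`, `generate_candidates`, `synthesize`) and the `Context` structure are hypothetical placeholders standing in for the ASR, context-aware LLM, and TTS components the abstract names.

```python
from dataclasses import dataclass


@dataclass
class Context:
    """Contextual cues the abstract mentions: partner and emotional tone."""
    partner: str          # e.g. "close friend", "nurse"
    emotional_tone: str   # e.g. "excited", "calm"


def transcribe(audio_path: str) -> str:
    """Placeholder ASR step; a real system would call a speech recognizer."""
    return "want coffee later"


def generate_candidates(draft: str, ctx: Context, n: int = 3) -> list[str]:
    """Placeholder context-aware LLM step: expand a terse draft into fuller,
    tone-appropriate phrasings the user can choose from."""
    prompt = (
        f"Rewrite the message '{draft}' as {n} natural sentences the speaker "
        f"could say to a {ctx.partner}, in a {ctx.emotional_tone} tone."
    )
    # In a real system this prompt would be sent to an LLM; here we stub it.
    return [f"[{ctx.emotional_tone}] {draft} (option {i + 1})" for i in range(n)]


def synthesize(text: str, voice_id: str) -> bytes:
    """Placeholder personalized TTS step; would return synthesized audio."""
    return text.encode("utf-8")


if __name__ == "__main__":
    ctx = Context(partner="close friend", emotional_tone="excited")
    draft = transcribe("utterance.wav")
    options = generate_candidates(draft, ctx)
    chosen = options[0]  # the user picks the phrasing they want to say
    audio = synthesize(chosen, voice_id="my_personal_voice")
    print(chosen, len(audio))
```

The key design point this sketch illustrates is that the user stays in control: the LLM only proposes candidate phrasings conditioned on context, and the selected phrase is rendered in the user's own personalized voice.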
Similar Papers
SpeakEasy: Enhancing Text-to-Speech Interactions for Expressive Content Creation
Human-Computer Interaction
Makes speech for videos sound the way you want.
A Novel Data Augmentation Approach for Automatic Speaking Assessment on Opinion Expressions
Computation and Language
Teaches computers to judge speaking skills from voice.
ImageTalk: Designing a Multimodal AAC Text Generation System Driven by Image Recognition and Natural Language Generation
Human-Computer Interaction
Helps people with speech problems talk faster.