Agent-Based Modular Learning for Multimodal Emotion Recognition in Human-Agent Systems

Published: December 2, 2025 | arXiv ID: 2512.10975v1

By: Matvey Nepomnyaschiy, Oleg Pereziabov, Anvar Tliamov, and more

Potential Business Impact:

Helps computers understand feelings from faces, voices, and words.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Effective human-agent interaction (HAI) relies on accurate and adaptive perception of human emotional states. While multimodal deep learning models, which leverage facial expressions, speech, and textual cues, offer high accuracy in emotion recognition, their training and maintenance are often computationally intensive and inflexible to modality changes. In this work, we propose a novel multi-agent framework for training multimodal emotion recognition systems, where each modality encoder and the fusion classifier operate as autonomous agents coordinated by a central supervisor. This architecture enables modular integration of new modalities (e.g., audio features via emotion2vec), seamless replacement of outdated components, and reduced computational overhead during training. We demonstrate the feasibility of our approach through a proof-of-concept implementation supporting vision, audio, and text modalities, with the classifier serving as a shared decision-making agent. Our framework not only improves training efficiency but also contributes to the design of more flexible, scalable, and maintainable perception modules for embodied and virtual agents in HAI scenarios.
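
To make the architecture concrete, below is a minimal Python (PyTorch) sketch of the agent-based layout the abstract describes: one encoder agent per modality, a shared fusion classifier agent, and a central supervisor that registers or swaps encoders. All class names, dimensions, the averaging fusion, and the message flow are illustrative assumptions, not the paper's actual implementation.

# Hypothetical sketch of the modular multi-agent setup; names and dimensions
# are assumptions for illustration only.
import torch
import torch.nn as nn

class EncoderAgent(nn.Module):
    """Autonomous agent wrapping one modality encoder (vision, audio, or text)."""
    def __init__(self, name: str, input_dim: int, embed_dim: int = 128):
        super().__init__()
        self.name = name
        self.encoder = nn.Sequential(nn.Linear(input_dim, embed_dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(x)

class ClassifierAgent(nn.Module):
    """Shared decision-making agent that fuses whichever embeddings are present."""
    def __init__(self, embed_dim: int, num_emotions: int):
        super().__init__()
        self.head = nn.Linear(embed_dim, num_emotions)

    def forward(self, embeddings: dict) -> torch.Tensor:
        # Simple late fusion by averaging; the paper's fusion strategy may differ.
        fused = torch.stack(list(embeddings.values()), dim=0).mean(dim=0)
        return self.head(fused)

class Supervisor:
    """Coordinates encoder agents and the classifier; modalities can be
    registered or replaced without rebuilding the whole pipeline."""
    def __init__(self, classifier: ClassifierAgent):
        self.encoders = {}
        self.classifier = classifier

    def register(self, agent: EncoderAgent) -> None:
        self.encoders[agent.name] = agent  # add or swap a modality encoder

    def predict(self, inputs: dict) -> torch.Tensor:
        # Only modalities with both a registered agent and an input contribute.
        embeddings = {
            name: agent(inputs[name])
            for name, agent in self.encoders.items()
            if name in inputs
        }
        return self.classifier(embeddings)

if __name__ == "__main__":
    supervisor = Supervisor(ClassifierAgent(embed_dim=128, num_emotions=7))
    supervisor.register(EncoderAgent("vision", input_dim=512))
    supervisor.register(EncoderAgent("audio", input_dim=768))  # e.g. emotion2vec-style features
    supervisor.register(EncoderAgent("text", input_dim=300))

    batch = {
        "vision": torch.randn(4, 512),
        "audio": torch.randn(4, 768),
        "text": torch.randn(4, 300),
    }
    logits = supervisor.predict(batch)  # shape: (4, 7)
    print(logits.shape)

In this sketch, dropping or upgrading a modality amounts to calling register with a new EncoderAgent, which mirrors the modular replacement and reduced retraining overhead the abstract claims.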

Country of Origin
🇷🇺 Russian Federation


Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)