An Intelligent AI Glasses System with Multi-Agent Architecture for Real-Time Voice Processing and Task Execution
By: Sheng-Kai Chen, Jyh-Horng Wu, Ching-Yao Lin, and more
This paper presents an AI glasses system that integrates real-time voice processing, artificial intelligence (AI) agents, and cross-network streaming. The system employs a dual-agent architecture: Agent 01 handles Automatic Speech Recognition (ASR), while Agent 02 manages AI processing through local Large Language Models (LLMs), Model Context Protocol (MCP) tools, and Retrieval-Augmented Generation (RAG). The system supports real-time RTSP streaming of voice and video data, eye-tracking data collection, and remote task execution through RabbitMQ messaging. The implementation demonstrates successful voice-command processing with multilingual support and cross-platform task execution.
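To make the dual-agent pipeline concrete, the sketch below shows one plausible shape of the handoff: Agent 01 emits an ASR transcript, and Agent 02 wraps it into a task message destined for a RabbitMQ queue. The message schema, field names, and `build_task_message` helper are illustrative assumptions, not the paper's actual protocol; the broker publish step (e.g., via the `pika` client) is indicated only in comments.

```python
import json

def build_task_message(command_text: str, language: str = "en",
                       source: str = "agent02") -> dict:
    """Assemble a task-execution message (hypothetical schema) that
    Agent 02 might publish to RabbitMQ for remote execution.

    command_text: the transcript produced by Agent 01's ASR stage.
    language: language tag from ASR, reflecting multilingual support.
    source: which agent produced the task.
    """
    return {
        "source": source,
        "command": command_text,
        "language": language,
    }

# Serialize for transport; a real deployment would publish this body
# to a RabbitMQ queue, e.g. with pika's channel.basic_publish(...).
payload = json.dumps(build_task_message("open the camera", language="zh"))
print(payload)
```

The point of a plain JSON body is that any cross-platform consumer (phone, desktop, edge device) can decode and act on the task without sharing code with the glasses.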