An Automated Multi-Modal Evaluation Framework for Mobile Intelligent Assistants
By: Meiping Wang, Jian Zhong, Rongduo Han, and more
Potential Business Impact:
Tests smart assistants automatically, replacing costly manual checks.
With the rapid development of mobile intelligent assistant technologies, multi-modal AI assistants have become essential interfaces for daily user interactions. However, current evaluation methods face challenges including high manual costs, inconsistent standards, and subjective bias. This paper proposes an automated multi-modal evaluation framework based on large language models and multi-agent collaboration. The framework employs a three-tier agent architecture consisting of interaction evaluation agents, semantic verification agents, and experience decision agents. Through supervised fine-tuning of the Qwen3-8B model, the framework achieves high evaluation agreement with human experts. Experimental results on eight major intelligent agents demonstrate the framework's effectiveness in predicting user satisfaction and identifying generation defects.
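To make the three-tier design concrete, below is a minimal sketch of how such a pipeline could be wired together. All class and function names (InteractionEvaluationAgent, SemanticVerificationAgent, ExperienceDecisionAgent, evaluate_assistant) are illustrative assumptions rather than the authors' actual API, and the deterministic placeholder heuristics stand in for the fine-tuned Qwen3-8B judges described in the paper.

```python
# Hypothetical sketch of a three-tier evaluation pipeline, assuming the
# structure described in the abstract. Names and heuristics are invented
# for illustration; the real framework would call fine-tuned LLM judges.

from dataclasses import dataclass


@dataclass
class Turn:
    """One assistant interaction, with a textual stand-in for the visual modality."""
    user_query: str
    assistant_response: str
    screenshot_desc: str


class InteractionEvaluationAgent:
    """Tier 1: scores whether the assistant completed the requested interaction."""

    def evaluate(self, turn: Turn) -> float:
        # Placeholder: treat any non-empty response as a completed interaction.
        return 1.0 if turn.assistant_response else 0.0


class SemanticVerificationAgent:
    """Tier 2: checks that the response is semantically faithful to the query."""

    def verify(self, turn: Turn) -> bool:
        # Placeholder keyword-overlap check standing in for an LLM verifier.
        query_terms = set(turn.user_query.lower().split())
        resp_terms = set(turn.assistant_response.lower().split())
        return len(query_terms & resp_terms) > 0


class ExperienceDecisionAgent:
    """Tier 3: fuses the lower tiers' outputs into a satisfaction verdict."""

    def decide(self, interaction_score: float, semantically_ok: bool) -> str:
        if interaction_score >= 0.5 and semantically_ok:
            return "satisfied"
        return "defect"


def evaluate_assistant(turns: list[Turn]) -> list[str]:
    """Run each turn through all three tiers and collect the verdicts."""
    tier1 = InteractionEvaluationAgent()
    tier2 = SemanticVerificationAgent()
    tier3 = ExperienceDecisionAgent()
    return [
        tier3.decide(tier1.evaluate(turn), tier2.verify(turn))
        for turn in turns
    ]


if __name__ == "__main__":
    sample = [Turn("set a 7 am alarm", "Alarm set for 7 am.", "clock app open")]
    print(evaluate_assistant(sample))  # -> ['satisfied']
```

The point of the composition is that each tier stays independently replaceable: swapping a placeholder heuristic for an LLM judge changes one class without touching how verdicts are aggregated.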
Similar Papers
Beyond Task Completion: An Assessment Framework for Evaluating Agentic AI Systems
Multiagent Systems
Tests AI agents on how they work together.
Agent-Based Modular Learning for Multimodal Emotion Recognition in Human-Agent Systems
Machine Learning (CS)
Helps computers understand feelings from faces, voices, and words.
Dynamic Evaluation Framework for Personalized and Trustworthy Agents: A Multi-Session Approach to Preference Adaptability
Information Retrieval
Tests AI helpers to make sure they're trustworthy.