AgentSense: LLMs Empower Generalizable and Explainable Web-Based Participatory Urban Sensing
By: Xusen Guo, Mingxing Peng, Xixuan Hao, and more
Potential Business Impact:
Helps cities spot and manage problems by using people's phones as mobile sensors.
Web-based participatory urban sensing has emerged as a vital approach for modern urban management by leveraging mobile individuals as distributed sensors. However, existing urban sensing systems struggle with limited generalization across diverse urban scenarios and poor interpretability in decision-making. In this work, we introduce AgentSense, a hybrid, training-free framework that integrates large language models (LLMs) into participatory urban sensing through a multi-agent evolution system. AgentSense initially employs a classical planner to generate baseline solutions and then iteratively refines them, adapting sensing task assignments to dynamic urban conditions and heterogeneous worker preferences while producing natural language explanations that enhance transparency and trust. Extensive experiments across two large-scale mobility datasets and seven types of dynamic disturbances demonstrate that AgentSense offers distinct advantages in adaptivity and explainability over traditional methods. Furthermore, compared to single-agent LLM baselines, our approach achieves better performance and robustness while delivering more reasonable and transparent explanations. These results position AgentSense as a significant advancement towards deploying adaptive and explainable urban sensing systems on the web.
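The abstract gives no implementation details, so the following is only a minimal Python sketch of the plan-then-refine loop it describes: a greedy nearest-worker heuristic stands in for the classical planner, and a small rule-based repair step stands in for the multi-agent LLM refinement and its natural-language explanations. Every name here (Task, Worker, classical_planner, agent_refine) is an illustrative assumption, not the paper's actual code.

```python
# Hypothetical sketch of a plan-then-refine loop for participatory sensing.
# Greedy assignment plays the "classical planner"; a rule-based repair step
# plays the LLM refinement agent and emits an explanation for each change.
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    location: float  # 1-D position as a stand-in for map coordinates

@dataclass
class Worker:
    worker_id: str
    location: float
    available: bool = True

def classical_planner(tasks, workers):
    """Baseline: greedily assign each task to the nearest free worker."""
    assignment, free = {}, [w for w in workers if w.available]
    for task in tasks:
        if not free:
            break
        nearest = min(free, key=lambda w: abs(w.location - task.location))
        assignment[task.task_id] = nearest.worker_id
        free.remove(nearest)
    return assignment

def agent_refine(assignment, tasks, workers, disturbance):
    """Stand-in for LLM refinement: repair the plan after a disturbance
    and record a natural-language explanation for each change."""
    explanations = []
    if disturbance["type"] == "worker_dropout":
        lost = disturbance["worker_id"]
        for w in workers:
            if w.worker_id == lost:
                w.available = False
        orphaned = [t for t, w in assignment.items() if w == lost]
        for task_id in orphaned:
            del assignment[task_id]
            task = next(t for t in tasks if t.task_id == task_id)
            busy = set(assignment.values())
            candidates = [w for w in workers
                          if w.available and w.worker_id not in busy]
            if candidates:
                pick = min(candidates,
                           key=lambda w: abs(w.location - task.location))
                assignment[task_id] = pick.worker_id
                explanations.append(
                    f"{task_id}: {lost} dropped out, so the task moved to "
                    f"{pick.worker_id}, the nearest remaining free worker.")
            else:
                explanations.append(
                    f"{task_id}: {lost} dropped out and no free worker "
                    f"remains, so the task is deferred.")
    return assignment, explanations

tasks = [Task("t1", 0.2), Task("t2", 0.8)]
workers = [Worker("w1", 0.1), Worker("w2", 0.9), Worker("w3", 0.5)]
plan = classical_planner(tasks, workers)   # {'t1': 'w1', 't2': 'w2'}
plan, why = agent_refine(plan, tasks, workers,
                         {"type": "worker_dropout", "worker_id": "w1"})
print(plan)                                # {'t2': 'w2', 't1': 'w3'}
print("\n".join(why))
```

Keeping the classical baseline fixed and letting the agent only edit it in response to disturbances mirrors the training-free design the abstract emphasizes: no model fine-tuning, just iterative refinement of an existing plan plus a human-readable rationale for each adjustment.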
Similar Papers
AgentSense: Virtual Sensor Data Generation Using LLM Agents in Simulated Home Environments
CV and Pattern Recognition
Creates realistic smart home activity data for AI.
Urban-MAS: Human-Centered Urban Prediction with LLM-Based Multi-Agent System
Multiagent Systems
Helps city computers predict crowds and traffic.
ProAgent: Harnessing On-Demand Sensory Contexts for Proactive LLM Agent Systems
Artificial Intelligence
Lets smart glasses help you before you ask.