Mixture-of-Experts for Personalized and Semantic-Aware Next Location Prediction
By: Shuai Liu, Ning Cao, Yile Chen, and more
Potential Business Impact:
Predicts where people will go next more accurately.
Next location prediction plays a critical role in understanding human mobility patterns. However, existing approaches face two core limitations: (1) they fall short in capturing the complex, multi-functional semantics of real-world locations; and (2) they lack the capacity to model heterogeneous behavioral dynamics across diverse user groups. To tackle these challenges, we introduce NextLocMoE, a novel framework built upon large language models (LLMs) and structured around a dual-level Mixture-of-Experts (MoE) design. Our architecture comprises two specialized modules: a Location Semantics MoE that operates at the embedding level to encode rich functional semantics of locations, and a Personalized MoE embedded within the Transformer backbone to dynamically adapt to individual user mobility patterns. In addition, we incorporate a history-aware routing mechanism that leverages long-term trajectory data to enhance expert selection and ensure prediction stability. Empirical evaluations across several real-world urban datasets show that NextLocMoE achieves superior performance in terms of predictive accuracy, cross-domain generalization, and interpretability.
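To make the history-aware routing idea concrete, below is a minimal PyTorch sketch of an MoE layer whose router conditions on a long-term trajectory summary in addition to the current token state. This is one plausible reading of the abstract, not the authors' implementation: the module names (`Expert`, `HistoryAwareMoE`), the dimensions, the top-k routing, and the mean-pooled history summary are all illustrative assumptions.

```python
# Hypothetical sketch of history-aware MoE routing (illustrative only;
# not the NextLocMoE authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """A small feed-forward expert network."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )

    def forward(self, x):
        return self.net(x)


class HistoryAwareMoE(nn.Module):
    """MoE layer whose router sees both the current trajectory state and a
    summary of the user's long-term history, so expert selection can reflect
    persistent mobility patterns (assumed formulation)."""
    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(Expert(dim, 4 * dim) for _ in range(num_experts))
        # Router input is the concatenation [current state ; history summary].
        self.router = nn.Linear(2 * dim, num_experts)
        self.top_k = top_k

    def forward(self, x, history):
        # x:       (batch, seq, dim)  -- embeddings of the recent trajectory
        # history: (batch, dim)       -- summary of the long-term trajectory
        h = history.unsqueeze(1).expand_as(x)
        logits = self.router(torch.cat([x, h], dim=-1))   # (batch, seq, experts)
        weights, idx = logits.topk(self.top_k, dim=-1)    # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                   # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out


# Usage: route location embeddings with a long-term history summary.
x = torch.randn(2, 16, 64)        # (batch, seq_len, dim)
history = torch.randn(2, 64)      # e.g. mean-pooled embedding of past trajectories
moe = HistoryAwareMoE(dim=64)
print(moe(x, history).shape)      # torch.Size([2, 16, 64])
```

Conditioning the router on the history summary, rather than the token state alone, is one way to obtain the prediction stability the abstract attributes to history-aware expert selection: the same user tends to activate the same experts across trips.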
Similar Papers
TrajMoE: Spatially-Aware Mixture of Experts for Unified Human Mobility Modeling
Artificial Intelligence
Helps predict city travel patterns anywhere.
Mixture of Experts in Large Language Models
Machine Learning (CS)
Makes smart computer programs learn faster and better.
MoE-Loco: Mixture of Experts for Multitask Locomotion
Robotics
Robots learn to walk on any surface.