Mixture-of-Experts for Personalized and Semantic-Aware Next Location Prediction

Published: May 30, 2025 | arXiv ID: 2505.24597v1

By: Shuai Liu, Ning Cao, Yile Chen and more

Potential Business Impact:

More accurate prediction of where people will go next.

Business Areas:
Indoor Positioning Navigation and Mapping

Next location prediction plays a critical role in understanding human mobility patterns. However, existing approaches face two core limitations: (1) they fall short in capturing the complex, multi-functional semantics of real-world locations; and (2) they lack the capacity to model heterogeneous behavioral dynamics across diverse user groups. To tackle these challenges, we introduce NextLocMoE, a novel framework built upon large language models (LLMs) and structured around a dual-level Mixture-of-Experts (MoE) design. Our architecture comprises two specialized modules: a Location Semantics MoE that operates at the embedding level to encode rich functional semantics of locations, and a Personalized MoE embedded within the Transformer backbone to dynamically adapt to individual user mobility patterns. In addition, we incorporate a history-aware routing mechanism that leverages long-term trajectory data to enhance expert selection and ensure prediction stability. Empirical evaluations across several real-world urban datasets show that NextLocMoE achieves superior performance in terms of predictive accuracy, cross-domain generalization, and interpretability.
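The dual-level design above builds on standard gated Mixture-of-Experts routing: a router scores each expert for a given input, the top-k experts are selected, and their outputs are combined with renormalized gate weights. The following NumPy sketch illustrates that generic mechanism only; the function names, shapes, and random experts are illustrative assumptions, not the paper's actual implementation or its history-aware router.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over expert scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(x, experts, router, top_k=2):
    """Minimal top-k MoE forward pass (illustrative sketch).

    x:       (d,) input embedding, e.g. a location representation
    experts: list of (d, d) weight matrices, one per expert
    router:  (num_experts, d) routing matrix producing one score per expert
    """
    gates = softmax(router @ x)              # gate weight per expert
    top = np.argsort(gates)[-top_k:]         # indices of the top-k experts
    g = gates[top] / gates[top].sum()        # renormalize selected gates
    # Weighted combination of the selected experts' outputs.
    return sum(gi * (experts[i] @ x) for gi, i in zip(g, top))

# Toy usage with random weights (hypothetical dimensions).
rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=d)
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
router = rng.normal(size=(n_experts, d))
y = moe_forward(x, experts, router)
print(y.shape)
```

In NextLocMoE the routing decision would additionally condition on long-term trajectory history; here the router sees only the current input, which is the simplest possible variant of the idea.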

Page Count
19 pages

Category
Computer Science:
Artificial Intelligence