Learning Multi-Modal Mobility Dynamics for Generalized Next Location Recommendation
By: Junshu Dai, Yu Wang, Tongya Zheng, and more
Potential Business Impact:
Helps apps guess where you'll go next.
Precise prediction of human mobility has significant socioeconomic impact, enabling applications such as location recommendation and evacuation guidance. However, existing methods suffer from limited generalization: unimodal approaches are constrained by data sparsity and inherent biases, while multi-modal methods struggle to capture mobility dynamics because of the semantic gap between static multi-modal representations and spatial-temporal dynamics. We therefore leverage multi-modal spatial-temporal knowledge to characterize mobility dynamics for next-location recommendation, in a method dubbed Multi-Modal Mobility (M³ob). First, we construct a unified spatial-temporal relational graph (STRG) for multi-modal representation, leveraging the functional semantics and spatial-temporal knowledge captured by a spatial-temporal knowledge graph (STKG) enhanced with large language models (LLMs). Second, we design a gating mechanism to fuse the spatial-temporal graph representations of different modalities, and propose an STKG-guided cross-modal alignment that injects dynamic spatial-temporal knowledge into the static image modality. Extensive experiments on six public datasets show that the proposed method not only achieves consistent improvements in normal scenarios but also generalizes significantly better in abnormal scenarios.
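The abstract's gating mechanism for fusing modality representations can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the two-embedding setup (one spatial-temporal graph embedding, one image embedding), and the gate parameterization (a sigmoid over the concatenated embeddings) are all assumptions about what a standard learned gate looks like.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_st, h_img, W, b):
    """Fuse a spatial-temporal graph embedding with an image embedding
    via a per-dimension learned gate (sketch; W and b would be trained).

    Each output dimension is a convex combination of the two inputs,
    weighted by a gate in (0, 1) computed from both embeddings."""
    z = np.concatenate([h_st, h_img], axis=-1)
    g = sigmoid(z @ W + b)                  # gate values in (0, 1)
    return g * h_st + (1.0 - g) * h_img

# Toy usage with random embeddings and untrained parameters.
rng = np.random.default_rng(0)
d = 8
h_st = rng.normal(size=d)                   # spatial-temporal modality
h_img = rng.normal(size=d)                  # static image modality
W = rng.normal(size=(2 * d, d)) * 0.1
b = np.zeros(d)
fused = gated_fusion(h_st, h_img, W, b)     # shape (8,)
```

Because the gate is a sigmoid, every fused dimension lies between the corresponding dimensions of the two inputs, so neither modality can be pushed outside the range the other provides.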
Similar Papers
Unsupervised Multimodal Graph-based Model for Geo-social Analysis
Social and Information Networks
Finds important news in social media posts.
Where to Go Next Day: Multi-scale Spatial-Temporal Decoupled Model for Mid-term Human Mobility Prediction
Artificial Intelligence
Predicts where people will go next week.
M3DMap: Object-aware Multimodal 3D Mapping for Dynamic Environments
CV and Pattern Recognition
Helps robots build 3D maps of moving things.