Sensing and Understanding the World over Air: A Large Multimodal Model for Mobile Networks

Published: November 17, 2025 | arXiv ID: 2511.21707v1

By: Zhuoran Duan, Yuhao Wei, Guoshun Nan, and more

Potential Business Impact:

Lets phones understand the world using invisible signals.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large models (LMs), such as ChatGPT, have made a significant impact across diverse domains and hold great potential to facilitate the evolution of network intelligence. Wireless-native multimodal large models (WMLMs) can sense and understand the physical world through multimodal data, serving as a key enabler that integrates communication, sensing, and intelligence, and can thus power various smart services for billions of users. However, research on WMLMs remains in its infancy, and the construction of domain-specific multimodal large models for wireless networks is still underexplored. In this paper, we outline the key characteristics of WMLMs and summarize existing methods, on the basis of which we propose a wireless-native multimodal training paradigm. Specifically, we construct a GPT-style WMLM and train it on a real-world large-scale dataset, leveraging wireless signals as an anchor modality for contrastive learning. Our approach outperforms existing small-scale models and large multimodal models, validating the feasibility of using wireless signals as a universal modality and highlighting WMLMs' potential to emerge as a new paradigm for future wireless networks.
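The abstract's central technique is contrastive learning with wireless signals as the anchor modality. The paper does not publish its training code here, but the idea can be sketched with a CLIP-style symmetric InfoNCE loss: embeddings of wireless signals and of a second modality (e.g. vision) from the same scene are pulled together, while mismatched pairs in the batch are pushed apart. The function names, embedding sizes, and stand-in data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce_loss(anchor, other, temperature=0.07):
    """Symmetric InfoNCE loss between an anchor modality (e.g. wireless-signal
    embeddings) and a second modality; row i of each matrix is assumed to come
    from the same physical scene (a matched pair)."""
    # L2-normalize so dot products become cosine similarities
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    b = other / np.linalg.norm(other, axis=1, keepdims=True)
    logits = a @ b.T / temperature       # (N, N): pairwise similarity matrix
    idx = np.arange(len(a))              # matched pairs sit on the diagonal

    def xent(l):
        # cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)                 # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the anchor->other and other->anchor directions
    return 0.5 * (xent(logits) + xent(logits.T))

# Toy demonstration with synthetic embeddings (stand-ins for real encoders)
rng = np.random.default_rng(0)
wireless = rng.normal(size=(8, 32))                    # anchor-modality batch
vision = wireless + 0.1 * rng.normal(size=(8, 32))     # correlated second modality
loss = info_nce_loss(wireless, vision)
```

Using the wireless embedding as the shared anchor means every other modality is aligned to the same space, which is what lets one signal modality act as the "universal" bridge the abstract describes.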

Page Count
7 pages

Category
Computer Science:
Networking and Internet Architecture