Sensing and Understanding the World over Air: A Large Multimodal Model for Mobile Networks
By: Zhuoran Duan, Yuhao Wei, Guoshun Nan, and more
Potential Business Impact:
Lets phones understand the world using invisible signals.
Large models (LMs), such as ChatGPT, have made a significant impact across diverse domains and hold great potential to facilitate the evolution of network intelligence. Wireless-native multimodal large models (WMLMs) can sense and understand the physical world through multimodal data, serving as a key enabler that integrates communication, sensing, and intelligence, and can thus boost various smart services for billions of users. However, research on WMLMs remains in its infancy, and the construction of domain-specific multimodal large models for wireless networks is still underexplored. In this paper, we outline the key characteristics of WMLMs and summarize existing methods, on the basis of which we propose a wireless-native multimodal training paradigm. Specifically, we construct a GPT-style WMLM and train it on a real-world large-scale dataset, leveraging wireless signals as an anchor modality for contrastive learning. Our approach demonstrates outstanding performance compared with existing small-scale models and large multimodal models, validating the feasibility of using wireless signals as a universal modality and highlighting WMLMs' potential to emerge as a new paradigm for future wireless networks.
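The anchor-modality contrastive idea can be made concrete. Below is a minimal, hypothetical PyTorch sketch of the setup the abstract hints at: every non-wireless modality is aligned to the wireless embedding with a symmetric InfoNCE loss, so wireless features serve as the shared anchor. The encoder architectures, feature dimensions, modality names, and batch layout are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, other: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between two L2-normalized embedding batches."""
    anchor = F.normalize(anchor, dim=-1)
    other = F.normalize(other, dim=-1)
    logits = anchor @ other.t() / temperature   # (B, B) pairwise similarity matrix
    targets = torch.arange(anchor.size(0))      # matched pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Hypothetical per-modality encoders projecting into a shared 512-d space.
encoders = {
    "wireless": torch.nn.Linear(256, 512),   # e.g. channel/CSI features (assumed dims)
    "image":    torch.nn.Linear(1024, 512),
    "text":     torch.nn.Linear(768, 512),
}

def anchored_contrastive_loss(batch: dict[str, torch.Tensor]) -> torch.Tensor:
    """Align every modality to the wireless anchor; no pairwise terms between
    the other modalities are needed, since wireless is the universal anchor."""
    z_wireless = encoders["wireless"](batch["wireless"])
    loss = torch.zeros(())
    for name, features in batch.items():
        if name == "wireless":
            continue
        loss = loss + info_nce(z_wireless, encoders[name](features))
    return loss

# Toy usage with random features standing in for a real paired dataset.
batch = {"wireless": torch.randn(8, 256), "image": torch.randn(8, 1024), "text": torch.randn(8, 768)}
print(anchored_contrastive_loss(batch))
```

Anchoring all modalities to one shared modality (here, wireless) is the same design choice used by CLIP-style and ImageBind-style training; the sketch above only illustrates that pattern, not the paper's specific model.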
Similar Papers
Large Multimodal Models-Empowered Task-Oriented Autonomous Communications: Design Methodology and Implementation Challenges
Machine Learning (CS)
AI helps machines talk and work together better.
Large Language Models for Next-Generation Wireless Network Management: A Survey and Tutorial
Networking and Internet Architecture
Lets phones understand and fix network problems.
Large Language Models-Empowered Wireless Networks: Fundamentals, Architecture, and Challenges
Networking and Internet Architecture
Uses AI language models to build and run wireless networks.