Large Multimodal Models-Empowered Task-Oriented Autonomous Communications: Design Methodology and Implementation Challenges

Published: October 23, 2025 | arXiv ID: 2510.20637v1

By: Hyun Jong Yang, Hyunsoo Kim, Hyeonho Noh, and more

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

AI models enable machines, vehicles, and robots to communicate and coordinate with one another autonomously.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) and large multimodal models (LMMs) have achieved unprecedented breakthroughs, showcasing remarkable capabilities in natural language understanding, generation, and complex reasoning. This transformative potential has positioned them as key enablers for 6G autonomous communications among machines, vehicles, and humanoids. In this article, we provide an overview of task-oriented autonomous communications with LLMs/LMMs, focusing on multimodal sensing integration, adaptive reconfiguration, and prompt/fine-tuning strategies for wireless tasks. We demonstrate the framework through three case studies: LMM-based traffic control, LLM-based robot scheduling, and LMM-based environment-aware channel estimation. Experimental results show that the proposed LLM/LMM-aided autonomous systems significantly outperform conventional and discriminative deep learning (DL) model-based techniques, maintaining robustness under dynamic objectives, varying input parameters, and heterogeneous multimodal conditions where conventional static optimization degrades.
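The abstract mentions prompt strategies that fold multimodal sensing into wireless tasks. A minimal, hypothetical sketch of how such a task-oriented prompt might be assembled — all field names, task wording, and values below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: compose a task-oriented prompt for an LMM-aided
# wireless task by pairing the task objective with summaries of
# multimodal sensing inputs. Field names and values are illustrative.

def build_wireless_prompt(task, objective, sensor_inputs, constraints):
    """Return a structured text prompt combining a wireless task
    description with multimodal sensing context and constraints."""
    lines = [
        f"Task: {task}",
        f"Objective: {objective}",
        "Sensing context:",
    ]
    # One bullet per sensing modality (e.g., camera, LiDAR, CSI history).
    for modality, summary in sensor_inputs.items():
        lines.append(f"  - {modality}: {summary}")
    if constraints:
        lines.append("Constraints: " + "; ".join(constraints))
    lines.append("Respond with the recommended configuration as JSON.")
    return "\n".join(lines)

prompt = build_wireless_prompt(
    task="environment-aware channel estimation",
    objective="minimize pilot overhead while keeping NMSE below -20 dB",
    sensor_inputs={
        "camera": "two moving vehicles, one blocking line of sight",
        "lidar": "dense foliage 15 m ahead of the base station",
    },
    constraints=["64 pilot symbols max", "latency budget 10 ms"],
)
print(prompt)
```

The point of the sketch is that the same template adapts to any of the three case studies (traffic control, robot scheduling, channel estimation) by swapping the task, objective, and sensing modalities — matching the abstract's claim of robustness under dynamic objectives and heterogeneous multimodal conditions.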

Country of Origin
🇰🇷 🇺🇸 Korea, Republic of; United States

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)