Retrieval Augmented Generation with Multi-Modal LLM Framework for Wireless Environments

Published: March 9, 2025 | arXiv ID: 2503.07670v1

By: Muhammad Ahmed Mohsin, Ahsan Bilal, Sagnik Bhattacharya and more

BigTech Affiliations: Stanford University

Potential Business Impact:

Makes wireless internet faster and more reliable.

Business Areas:
Augmented Reality Hardware, Software

Future wireless networks aim to deliver higher data rates and lower power consumption while ensuring seamless connectivity, necessitating robust optimization. Large language models (LLMs) have been deployed for generalized optimization scenarios. To take advantage of generative AI (GAI) models, we propose retrieval augmented generation (RAG) for multi-sensor wireless environment perception. Utilizing domain-specific prompt engineering, we apply RAG to efficiently harness multimodal data inputs from sensors in a wireless environment. We propose key pre-processing pipelines, including image-to-text conversion, object detection, and distance calculation, that fuse multi-sensor data into a unified vector database, which is crucial for optimizing LLMs on global wireless tasks. Our evaluation, conducted with OpenAI's GPT and Google's Gemini models, demonstrates improvements of 8%, 8%, 10%, 7%, and 12% in relevancy, faithfulness, completeness, similarity, and accuracy, respectively, compared to conventional LLM-based designs. Furthermore, our RAG-based LLM framework with vectorized databases is computationally efficient, providing real-time convergence under latency constraints.
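The pipeline the abstract describes, converting each sensor modality to text, indexing the results in a unified vector database, and retrieving context for an LLM prompt, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the bag-of-words embedding stands in for a real learned embedding model, and the sensor strings, `VectorDB` class, and beam-selection task are hypothetical examples.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding; a real system would use a
    learned (multimodal) embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorDB:
    """Unified vector database over textualized multi-sensor inputs."""
    def __init__(self):
        self.entries = []  # list of (embedding, text) pairs

    def add(self, text):
        self.entries.append((embed(text), text))

    def retrieve(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(e[0], q),
                        reverse=True)
        return [text for _, text in ranked[:k]]

# Pre-processing: each modality is converted to text before indexing,
# mirroring the image-to-text, object-detection, and distance steps.
db = VectorDB()
db.add("camera: pedestrian detected at bearing 30 degrees")   # image-to-text + detection
db.add("lidar: building blocking line of sight at 45 m")      # distance calculation
db.add("gps: user moving east toward cell edge")

# Retrieval: the most relevant sensor observations become LLM context.
context = db.retrieve("obstacles blocking the wireless link", k=2)
prompt = "Context:\n" + "\n".join(context) + "\nTask: select the best beam direction."
# `prompt` would then be sent to an LLM such as GPT or Gemini.
```

Keeping the database vectorized means retrieval is a similarity search rather than a full scan of raw sensor logs, which is what allows the framework to meet the latency constraints mentioned in the abstract.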

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Networking and Internet Architecture