BeamLLM: Vision-Empowered mmWave Beam Prediction with Large Language Models
By: Can Zheng, Jiguang He, Guofa Cai, et al.
Potential Business Impact:
Helps connected vehicles keep fast wireless links by using camera images to predict the best antenna beam, cutting setup time and delay.
In this paper, we propose BeamLLM, a vision-aided millimeter-wave (mmWave) beam prediction framework leveraging large language models (LLMs) to address the challenges of high training overhead and latency in mmWave communication systems. By combining computer vision (CV) with LLMs' cross-modal reasoning capabilities, the framework extracts user equipment (UE) positional features from RGB images and aligns visual-temporal features with LLMs' semantic space through reprogramming techniques. Evaluated on a realistic vehicle-to-infrastructure (V2I) scenario, the proposed method achieves 61.01% top-1 accuracy and 97.39% top-3 accuracy in standard prediction tasks, significantly outperforming traditional deep learning models. In few-shot prediction scenarios, the performance degradation is limited to 12.56% (top-1) and 5.55% (top-3) from time sample 1 to 10, demonstrating superior prediction capability.
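To make the pipeline described above concrete, here is a minimal, illustrative PyTorch sketch of a vision-to-LLM reprogramming setup of this kind. The ResNet-18 image encoder, GPT-2 backbone, the 64-entry prototype bank, the 64-beam codebook, and all shapes and hyperparameters are assumptions for illustration only; the abstract does not specify the authors' actual architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18
from transformers import GPT2Model


class BeamPredictor(nn.Module):
    """Illustrative vision-aided beam predictor in the spirit of BeamLLM.

    Assumptions (not given in the abstract): ResNet-18 as the visual encoder,
    GPT-2 as a stand-in for the frozen LLM backbone, a single cross-attention
    layer as the "reprogramming" module, and a 64-beam codebook.
    """

    def __init__(self, num_beams: int = 64, llm_dim: int = 768):
        super().__init__()
        # Visual encoder: extracts per-frame UE positional features from RGB images.
        cnn = resnet18(weights=None)
        cnn.fc = nn.Identity()                      # keep the 512-d pooled features
        self.visual_encoder = cnn

        # Project visual features to the LLM embedding width.
        self.query_proj = nn.Linear(512, llm_dim)

        # Reprogramming: cross-attention that aligns visual-temporal features
        # with a small bank of learnable "text prototype" embeddings standing
        # in for (compressed) LLM vocabulary embeddings -- an assumption here.
        self.prototypes = nn.Parameter(torch.randn(64, llm_dim) * 0.02)
        self.reprogram = nn.MultiheadAttention(llm_dim, num_heads=8, batch_first=True)

        # Frozen LLM backbone used purely as a sequence reasoner
        # (downloads the pretrained GPT-2 checkpoint on first use).
        self.llm = GPT2Model.from_pretrained("gpt2")
        for p in self.llm.parameters():
            p.requires_grad = False

        # Prediction head over the beam codebook.
        self.head = nn.Linear(llm_dim, num_beams)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W) sequence of RGB images.
        b, t, c, h, w = frames.shape
        feats = self.visual_encoder(frames.reshape(b * t, c, h, w))   # (b*t, 512)
        feats = self.query_proj(feats).reshape(b, t, -1)              # (b, t, llm_dim)

        # Align the visual-temporal features with the LLM's semantic space.
        protos = self.prototypes.unsqueeze(0).expand(b, -1, -1)       # (b, 64, llm_dim)
        aligned, _ = self.reprogram(feats, protos, protos)            # (b, t, llm_dim)

        hidden = self.llm(inputs_embeds=aligned).last_hidden_state    # (b, t, llm_dim)
        return self.head(hidden[:, -1])                               # beam logits


# Top-1 / top-3 beam prediction for a dummy 8-frame sequence.
model = BeamPredictor()
logits = model(torch.randn(2, 8, 3, 224, 224))
top3_beams = logits.topk(3, dim=-1).indices
```

In line with typical reprogramming setups, only the visual encoder, projection, reprogramming layer, and classification head would be trained here; the LLM stays frozen, which is what keeps the adaptation overhead low.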
Similar Papers
M2BeamLLM: Multimodal Sensing-empowered mmWave Beam Prediction with Large Language Models
Computation and Language
Combines several kinds of sensors so cars can keep fast wireless links with roadside stations.
Vehicle-to-Infrastructure Collaborative Spatial Perception via Multimodal Large Language Models
Machine Learning (CS)
Lets cars and roadside infrastructure share what they see to build a clearer picture of the road.
Cross-Environment Transfer Learning for Location-Aided Beam Prediction in 5G and Beyond Millimeter-Wave Networks
Signal Processing
Teaches phones to connect faster using less data.