When marine radar target detection meets pretrained large language models
By: Qiying Hu, Linping Zhang, Xueqian Wang, and more
Potential Business Impact:
Helps radar spot ships and other targets at sea more reliably.
Deep learning (DL) methods are widely used to extract high-dimensional patterns from the sequence features of radar echo signals. However, conventional DL algorithms face challenges such as redundant feature segments and constraints imposed by restricted model sizes. To address these issues, we propose a framework that integrates feature preprocessing with large language models (LLMs). Our preprocessing module tokenizes radar sequence features, applies a patch selection algorithm to filter out uninformative segments, and projects the selected patches into embeddings compatible with the feature space of pre-trained LLMs. Leveraging these refined embeddings, we incorporate a pre-trained LLM and fine-tune only its normalization layers, reducing the training burden while enhancing performance. Experiments on measured datasets demonstrate that the proposed method significantly outperforms state-of-the-art baselines in supervised learning tests.
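The sketch below illustrates the kind of pipeline the abstract describes: patchify a radar feature sequence, drop uninformative patches, project the rest into an LLM's embedding space, and fine-tune only the normalization layers of a frozen pre-trained backbone. It is a minimal illustration under stated assumptions, not the authors' implementation: the GPT-2 backbone, the variance-based patch score, the linear projection, and the binary detection head are all hypothetical choices.

```python
# Minimal sketch (assumptions: GPT-2 backbone, variance-based patch scoring,
# linear projection, binary detection head; the paper may use different choices).
import torch
import torch.nn as nn
from transformers import GPT2Model


class RadarPatchPreprocessor(nn.Module):
    """Tokenize a radar feature sequence into patches, keep the most
    informative ones, and project them into the LLM embedding space."""

    def __init__(self, patch_len=16, keep_ratio=0.5, llm_dim=768):
        super().__init__()
        self.patch_len = patch_len
        self.keep_ratio = keep_ratio
        self.proj = nn.Linear(patch_len, llm_dim)  # patch -> LLM embedding

    def forward(self, x):
        # x: (batch, seq_len) radar echo feature sequence
        b, t = x.shape
        n = t // self.patch_len
        patches = x[:, : n * self.patch_len].reshape(b, n, self.patch_len)

        # Assumed selection criterion: score patches by variance and keep
        # the top-k in temporal order; the paper's scoring rule may differ.
        scores = patches.var(dim=-1)                     # (b, n)
        k = max(1, int(n * self.keep_ratio))
        idx = scores.topk(k, dim=-1).indices.sort(dim=-1).values
        selected = torch.gather(
            patches, 1, idx.unsqueeze(-1).expand(-1, -1, self.patch_len)
        )
        return self.proj(selected)                       # (b, k, llm_dim)


# Pretrained LLM backbone: freeze everything except the normalization layers.
llm = GPT2Model.from_pretrained("gpt2")
for name, param in llm.named_parameters():
    param.requires_grad = "ln" in name  # GPT-2 LayerNorms are named ln_1 / ln_2 / ln_f

preproc = RadarPatchPreprocessor(patch_len=16, keep_ratio=0.5,
                                 llm_dim=llm.config.n_embd)
head = nn.Linear(llm.config.n_embd, 2)  # target present / absent

# Example forward pass on dummy radar sequences.
echo = torch.randn(4, 256)                        # 4 sequences of 256 samples
emb = preproc(echo)                               # (4, 8, 768) selected patch embeddings
hidden = llm(inputs_embeds=emb).last_hidden_state
logits = head(hidden.mean(dim=1))                 # pooled features -> detection logits
```

Only the preprocessor, the detection head, and the LLM's LayerNorm parameters would receive gradients here, which is what keeps the fine-tuning cost low relative to full-model training.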
Similar Papers
RadarLLM: Adapting Pretrained Large Language Models for Marine Radar Target Detection with Preference-aware Loss
Signal Processing
Helps radar find boats in bad weather.
Can Large Language Models Identify Materials from Radar Signals?
Signal Processing
Robots use radar to guess what things are made of.
A Foundation Model for Massive MIMO Precoding with an Adaptive per-User Rate-Power Tradeoff
Signal Processing
Makes wireless signals use less power.