Data-driven RF Tomography via Cross-modal Sensing and Continual Learning
By: Yang Zhao, Tao Wang, Said Elhadi
Potential Business Impact:
Finds buried root tubers even when conditions change.
Data-driven radio frequency (RF) tomography has demonstrated significant potential for underground target detection, owing to the penetrative nature of RF signals through soil. However, achieving accurate and robust performance in dynamic environments remains challenging. In this work, we propose a data-driven radio frequency tomography (DRIFT) framework with the following key components to reconstruct cross-section images of underground root tubers, even under significant changes in RF signals. First, we design a cross-modal sensing system with RF and visual sensors, and train an RF tomography deep neural network (DNN) model following a cross-modal learning approach. Then, we apply continual learning to automatically update the DNN model once environment changes are detected. Experimental results show that our approach achieves an average equivalent diameter error of 2.29 cm, a 23.2% improvement over the state-of-the-art approach. Our DRIFT code and dataset are publicly available at https://github.com/Data-driven-RTI/DRIFT.
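To make the two key ideas in the abstract concrete, the sketch below illustrates (1) cross-modal learning, where images from the co-located visual sensor serve as labels for a DNN that maps RF measurements to cross-section images, and (2) continual learning, where the model is fine-tuned once a change in the RF signal distribution is detected. This is a minimal, hypothetical PyTorch-style sketch; class and function names such as `RFTomographyNet` and `environment_changed` are illustrative assumptions, not the authors' released implementation (see the GitHub link above for that).

```python
# Hypothetical sketch of the DRIFT idea: RF measurements -> cross-section image,
# supervised by a visual sensor, with a continual-learning update on drift.
import torch
import torch.nn as nn


class RFTomographyNet(nn.Module):
    """Toy DNN: flattened RF link measurements -> 2D cross-section image."""

    def __init__(self, num_links: int = 256, image_size: int = 32):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(num_links, 512), nn.ReLU(),
            nn.Linear(512, image_size * image_size), nn.Sigmoid(),
        )

    def forward(self, rf: torch.Tensor) -> torch.Tensor:
        out = self.net(rf)
        return out.view(-1, 1, self.image_size, self.image_size)


def train_cross_modal(model, rf_batch, visual_labels, optimizer):
    """One supervised step: the visual sensor's cross-section image is the label."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(rf_batch), visual_labels)
    loss.backward()
    optimizer.step()
    return loss.item()


def environment_changed(rf_batch, running_mean, threshold=3.0):
    """Crude drift check: flag a change when the RF signal statistics shift strongly."""
    return (rf_batch.mean() - running_mean).abs() > threshold


# Usage sketch with random stand-in data.
model = RFTomographyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
rf = torch.randn(8, 256)            # RF measurements (one value per link)
labels = torch.rand(8, 1, 32, 32)   # cross-section labels from the visual sensor
train_cross_modal(model, rf, labels, opt)

new_rf = torch.randn(8, 256) + 5.0  # shifted signals in a changed environment
if environment_changed(new_rf, running_mean=rf.mean()):
    # Continual-learning update: fine-tune on freshly collected cross-modal pairs.
    train_cross_modal(model, new_rf, torch.rand(8, 1, 32, 32), opt)
```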
Similar Papers
Comprehensive Evaluation of Rule-Based, Machine Learning, and Deep Learning in Human Estimation Using Radio Wave Sensing: Accuracy, Spatial Generalization, and Output Granularity Trade-offs
CV and Pattern Recognition
Finds people in rooms, even if layout changes.
Comprehensive Deployment-Oriented Assessment for Cross-Environment Generalization in Deep Learning-Based mmWave Radar Sensing
CV and Pattern Recognition
Radar sees people accurately in new places.
3D Dynamic Radio Map Prediction Using Vision Transformers for Low-Altitude Wireless Networks
Machine Learning (CS)
Helps drones stay connected in the air.