PhysLLM: Harnessing Large Language Models for Cross-Modal Remote Physiological Sensing
By: Yiping Xie, Bo Zhao, Mingtong Dai, and more
Potential Business Impact:
Measures heart rate accurately even with bad lighting.
Remote photoplethysmography (rPPG) enables non-contact physiological measurement but remains highly susceptible to illumination changes, motion artifacts, and limited temporal modeling. Large Language Models (LLMs) excel at capturing long-range dependencies and thus offer a potential solution, but their text-centric design makes them struggle with the continuous, noise-sensitive nature of rPPG signals. To bridge this gap, we introduce PhysLLM, a collaborative optimization framework that synergizes LLMs with domain-specific rPPG components. Specifically, the Text Prototype Guidance (TPG) strategy establishes cross-modal alignment by projecting hemodynamic features into an LLM-interpretable semantic space, bridging the representational gap between physiological signals and linguistic tokens. In addition, a novel Dual-Domain Stationary (DDS) algorithm resolves signal instability through adaptive time-frequency domain feature re-weighting. Finally, rPPG task-specific cues systematically inject physiological priors through physiological statistics, environmental contextual answering, and task descriptions. This cross-modal learning integrates both visual and textual information, enabling dynamic adaptation to challenging scenarios such as variable illumination and subject movement. Evaluated on four benchmark datasets, PhysLLM achieves state-of-the-art accuracy and robustness, demonstrating superior generalization across lighting variations and motion scenarios.
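To make the two signal-side components of the abstract more concrete, below is a minimal PyTorch sketch of (a) DDS-style adaptive time-frequency feature re-weighting and (b) TPG-style projection of rPPG features onto text-prototype embeddings in an LLM's token space. All module names, dimensions, gating formulations, and the prototype mechanism are assumptions made for illustration; they are not the authors' released implementation.

```python
# Illustrative sketch only; specific layer choices are assumptions, not PhysLLM's code.
import torch
import torch.nn as nn


class DualDomainReweight(nn.Module):
    """Adaptive time-frequency re-weighting of an rPPG feature sequence (DDS-style sketch)."""

    def __init__(self, dim: int):
        super().__init__()
        self.time_gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.freq_gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) hemodynamic features extracted from face video.
        # Time-domain branch: channel-wise gate from the temporal mean.
        x_time = x * self.time_gate(x.mean(dim=1, keepdim=True))
        # Frequency-domain branch: gate the spectrum, then return to the time domain.
        xf = torch.fft.rfft(x, dim=1)
        gate = self.freq_gate(xf.abs().mean(dim=1, keepdim=True))
        x_freq = torch.fft.irfft(xf * gate, n=x.size(1), dim=1)
        return x_time + x_freq


class TextPrototypeGuidance(nn.Module):
    """Project rPPG features onto learnable text prototypes in the LLM embedding space (TPG-style sketch)."""

    def __init__(self, dim: int, llm_dim: int, num_prototypes: int = 32):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, llm_dim))
        self.query = nn.Linear(dim, llm_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Soft-assign each time step to prototypes living in the LLM embedding space,
        # producing token-like inputs an LLM backbone can attend over.
        attn = torch.softmax(self.query(x) @ self.prototypes.T, dim=-1)
        return attn @ self.prototypes  # (batch, time, llm_dim)


if __name__ == "__main__":
    feats = torch.randn(2, 160, 64)  # 2 clips, 160 frames, 64-d hemodynamic features
    tokens = TextPrototypeGuidance(64, 768)(DualDomainReweight(64)(feats))
    print(tokens.shape)  # torch.Size([2, 160, 768])
```

In this reading, the re-weighted signal features are turned into LLM-space tokens that could be concatenated with the paper's textual cues (physiological statistics, environmental context, task description) before being fed to the language model backbone.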
Similar Papers
Remote Photoplethysmography in Real-World and Extreme Lighting Scenarios
CV and Pattern Recognition
Reads your heartbeat from a video.
Memory-efficient Low-latency Remote Photoplethysmography through Temporal-Spatial State Space Duality
CV and Pattern Recognition
Measures heart rate from faces without touching.
A Nutrition Multimodal Photoplethysmography Language Model
Machine Learning (CS)
Tracks eating habits using your pulse and food words.