Latent Sensor Fusion: Multimodal Learning of Physiological Signals for Resource-Constrained Devices
By: Abdullah Ahmed, Jeremy Gummeson
Potential Business Impact:
Lets low-power devices understand many body signals together.
Latent spaces offer an efficient and effective means of summarizing data while implicitly preserving meta-information through relational encoding; we refer to the resulting representations as meta-embeddings. We leverage these meta-embeddings to develop a modality-agnostic, unified encoder. Our method employs latent sensor fusion to analyze and correlate multimodal physiological signals. Using a compressed-sensing approach with autoencoder-based latent-space fusion, we address the computational challenges of biosignal analysis on resource-constrained devices. Experimental results show that our unified encoder is significantly faster, lighter, and more scalable than modality-specific alternatives, without compromising representational accuracy.
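To make the idea concrete, below is a minimal PyTorch sketch of autoencoder-based latent-space fusion with a single weight-shared encoder. Everything here is an illustrative assumption rather than the paper's implementation: the layer sizes, window length, mean-fusion rule, and the names UnifiedEncoder and LatentFusionAutoencoder are all hypothetical. The shared encoder stands in for the learned compressive mapping that turns each raw signal window into a small latent code.

    # Minimal sketch of autoencoder-based latent fusion for multimodal
    # biosignals. All sizes, names, and the mean-fusion rule are
    # illustrative assumptions, not the authors' implementation.
    import torch
    import torch.nn as nn

    class UnifiedEncoder(nn.Module):
        """One weight-shared encoder applied to every modality (modality-agnostic)."""
        def __init__(self, window: int = 256, latent_dim: int = 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(window, 128), nn.ReLU(),
                nn.Linear(128, latent_dim),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Map a raw signal window (B, window) to a compact latent code (B, latent_dim).
            return self.net(x)

    class LatentFusionAutoencoder(nn.Module):
        def __init__(self, window: int = 256, latent_dim: int = 32):
            super().__init__()
            self.encoder = UnifiedEncoder(window, latent_dim)
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(),
                nn.Linear(128, window),
            )

        def forward(self, signals: list[torch.Tensor]):
            # Encode each modality with the *same* encoder, then fuse in latent space.
            latents = torch.stack([self.encoder(s) for s in signals])  # (M, B, D)
            fused = latents.mean(dim=0)                                # simple mean fusion
            recon = self.decoder(fused)                                # (B, window)
            return fused, recon

    # Usage: two synthetic modalities (e.g., PPG and ECG windows), batch of 4.
    model = LatentFusionAutoencoder()
    ppg, ecg = torch.randn(4, 256), torch.randn(4, 256)
    fused, recon = model([ppg, ecg])
    print(fused.shape, recon.shape)  # torch.Size([4, 32]) torch.Size([4, 256])

Because one encoder serves every modality, the parameter count stays flat as modalities are added, which is the property the abstract's claims of speed, lightness, and scalability over modality-specific alternatives rest on.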
Similar Papers
Latent Space Data Fusion Outperforms Early Fusion in Multimodal Mental Health Digital Phenotyping Data
Machine Learning (CS)
Helps doctors predict depression using phone data.
The Latent Space Hypothesis: Toward Universal Medical Representation Learning
Quantitative Methods
Helps doctors see each patient's unique health path.
Leveraging Foundational Models and Simple Fusion for Multi-modal Physiological Signal Analysis
Machine Learning (CS)
Combines heart and brain signals for better health insights.