Diffusion-Augmented Contrastive Learning: A Noise-Robust Encoder for Biosignal Representations
By: Rami Zewail
Potential Business Impact:
Makes machines understand body signals better.
Learning robust representations for biosignals is often hampered by the challenge of designing effective data augmentations. Traditional methods can fail to capture the complex variations inherent in physiological data. Within this context, we propose a novel hybrid framework, Diffusion-Augmented Contrastive Learning (DACL), that fuses concepts from diffusion models and supervised contrastive learning. The DACL framework operates on a latent space created by a lightweight Variational Autoencoder (VAE) trained on our novel Scattering Transformer (ST) features [12]. It uses the diffusion forward process as a principled data augmentation technique to generate multiple noisy views of these latent embeddings. A U-Net style encoder is then trained with a supervised contrastive objective to learn a representation that balances class discrimination with robustness to noise across diffusion time steps. We evaluated this proof-of-concept method on the PhysioNet 2017 ECG dataset, achieving a competitive AUROC of 0.7815. This work establishes a new paradigm for representation learning by using the diffusion process itself to drive the contrastive objective, creating noise-invariant embeddings with strong class separability.
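The core idea in the abstract, using the closed-form diffusion forward process q(z_t | z_0) = N(sqrt(ᾱ_t)·z_0, (1 − ᾱ_t)·I) to produce noisy "views" of latent embeddings, then applying a supervised contrastive loss, can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the beta schedule, time step, embedding dimension, and the simplified SupCon-style loss are all assumptions, and the actual paper trains a U-Net encoder on top of VAE latents.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_augment(z0, t, betas):
    """Noisy view of latent embeddings z0 at diffusion step t,
    sampled from q(z_t | z_0) = N(sqrt(a_bar_t)*z0, (1 - a_bar_t)*I)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(z0.shape)
    return np.sqrt(alpha_bar) * z0 + np.sqrt(1.0 - alpha_bar) * eps

def supcon_loss(emb, labels, temp=0.1):
    """Simplified supervised contrastive loss over L2-normalized embeddings:
    same-label samples are positives, all other samples are negatives."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T / temp
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)
    logits = sim - sim.max(axis=1, keepdims=True)          # numerical stability
    denom = (np.exp(logits) * not_self).sum(axis=1, keepdims=True)
    log_prob = logits - np.log(denom)                      # log-softmax over others
    pos = (labels[:, None] == labels[None, :]) & not_self
    return -(log_prob * pos).sum() / pos.sum()             # mean over positive pairs

# Hypothetical toy batch: 8 latent embeddings of dim 16, two classes,
# a linear beta schedule, and one noising step t = 30.
betas = np.linspace(1e-4, 0.02, 100)
z0 = rng.standard_normal((8, 16))
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
z_noisy = diffusion_augment(z0, t=30, betas=betas)
loss = supcon_loss(z_noisy, labels)
print(z_noisy.shape, float(loss))
```

In the full framework, the loss would be applied to encoder outputs for views drawn at several different time steps t, encouraging embeddings that stay class-discriminative as noise increases.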
Similar Papers
Diffusion-augmented Graph Contrastive Learning for Collaborative Filter
Information Retrieval
Helps movie apps suggest better films for you.
Automated Learning of Semantic Embedding Representations for Diffusion Models
Machine Learning (CS)
Makes computers understand pictures better for learning.
Simple Graph Contrastive Learning via Fractional-order Neural Diffusion Networks
Machine Learning (CS)
Helps computers learn from connected data without extra examples.