Self-learned representation-guided latent diffusion model for breast cancer classification in deep ultraviolet whole surface images
By: Pouya Afshin, David Helminiak, Tianling Niu, and more
Potential Business Impact:
Makes breast cancer surgery safer by detecting tiny residual cancer cells.
Breast-Conserving Surgery (BCS) requires precise intraoperative margin assessment to preserve healthy tissue. Deep Ultraviolet Fluorescence Scanning Microscopy (DUV-FSM) offers rapid, high-resolution surface imaging for this purpose; however, the scarcity of annotated DUV data hinders the training of robust deep learning models. To address this, we propose a Self-Supervised Learning (SSL)-guided Latent Diffusion Model (LDM) to generate high-quality synthetic training patches. By guiding the LDM with embeddings from a fine-tuned DINO teacher, we inject rich semantic details of cellular structures into the synthetic data. We combine real and synthetic patches to fine-tune a Vision Transformer (ViT), aggregating patch-level predictions for whole surface image (WSI)-level classification. Experiments using 5-fold cross-validation demonstrate that our method achieves 96.47% accuracy and reduces the Fréchet Inception Distance (FID) score to 45.72, significantly outperforming class-conditioned baselines.
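The patch prediction aggregation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, array shapes, and the softmax-averaging rule are assumptions (papers in this area also use majority voting or max-pooling over patches).

```python
import numpy as np

def aggregate_wsi_prediction(patch_logits: np.ndarray) -> int:
    """Aggregate per-patch class logits into one WSI-level label.

    patch_logits: (num_patches, num_classes) array of raw ViT outputs.
    Averages softmax probabilities across patches, then takes argmax;
    the paper's exact aggregation rule may differ.
    """
    # Numerically stable softmax over each patch's logits.
    z = patch_logits - patch_logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Mean probability per class over all patches, then argmax.
    return int(probs.mean(axis=0).argmax())

# Toy example: 3 patches, 2 classes (0 = benign, 1 = malignant).
logits = np.array([[0.2, 1.5], [2.0, 0.1], [0.0, 3.0]])
label = aggregate_wsi_prediction(logits)  # two of three patches favor class 1
```

Averaging probabilities rather than hard patch votes lets confident patches outweigh ambiguous ones, which matters when only a small fraction of a margin surface contains tumor.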
Similar Papers
Breast Cancer Classification in Deep Ultraviolet Fluorescence Images Using a Patch-Level Vision Transformer Framework
Image and Video Processing
Helps doctors see cancer on tissue samples.
DA-SSL: self-supervised domain adaptor to leverage foundational models in turbt histopathology slides
CV and Pattern Recognition
Helps doctors spot bladder cancer better.
DiffKD-DCIS: Predicting Upgrade of Ductal Carcinoma In Situ with Diffusion Augmentation and Knowledge Distillation
CV and Pattern Recognition
Helps doctors spot breast cancer early.