HemBLIP: A Vision-Language Model for Interpretable Leukemia Cell Morphology Analysis
By: Julie van Logtestijn, Petru Manescu
Potential Business Impact:
Helps clinicians identify leukemia in blood cell images and explains the cell features behind each call.
Microscopic evaluation of white blood cell morphology is central to leukemia diagnosis, yet current deep learning models often act as black boxes, limiting clinical trust and adoption. We introduce HemBLIP, a vision-language model designed to generate interpretable, morphology-aware descriptions of peripheral blood cells. Using a newly constructed dataset of 14k healthy and leukemic cells paired with expert-derived attribute captions, we adapt a general-purpose VLM via both full fine-tuning and LoRA-based parameter-efficient training, and benchmark against the biomedical foundation model MedGEMMA. HemBLIP achieves higher caption quality and morphological accuracy than this baseline, while LoRA adaptation provides further gains at significantly reduced computational cost. These results highlight the promise of vision-language models for transparent and scalable hematological diagnostics.
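To make the LoRA-based parameter-efficient adaptation concrete, the sketch below shows what such a setup can look like with the Hugging Face transformers and peft libraries on a BLIP-style captioning backbone. The checkpoint, adapter rank, target modules, and the example image and caption are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: LoRA adaptation of a BLIP captioning model for
# morphology-aware cell descriptions. Hyperparameters are assumptions.
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Assumed general-purpose captioning checkpoint (not the paper's exact base).
checkpoint = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(checkpoint)
model = BlipForConditionalGeneration.from_pretrained(checkpoint)

# Inject low-rank adapters into the attention projections; only these small
# adapter matrices are trained, which is where the compute savings come from.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query", "value"],  # assumed module names in the text decoder
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of total weights

# One training step on an (image, expert attribute caption) pair.
image = Image.open("cell.png").convert("RGB")                 # hypothetical file
caption = "blast cell with high nuclear-to-cytoplasmic ratio"  # illustrative caption

inputs = processor(images=image, text=caption, return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
outputs.loss.backward()  # in practice, wrap in an optimizer loop over the dataset
```

The same loop with all backbone weights unfrozen would correspond to the full fine-tuning setting the abstract compares against.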
Similar Papers
Uni-Hema: Unified Model for Digital Hematopathology
CV and Pattern Recognition
Helps doctors diagnose blood diseases faster with one unified model.
A Multicenter Benchmark of Multiple Instance Learning Models for Lymphoma Subtyping from H&E-stained Whole Slide Images
CV and Pattern Recognition
Helps doctors identify lymphoma subtypes faster from tissue slide images.
MORPHFED: Federated Learning for Cross-institutional Blood Morphology Analysis
Machine Learning (CS)
Helps doctors diagnose blood diseases across institutions without sharing patient data.