Pathology Context Recalibration Network for Ocular Disease Recognition
By: Zunjie Xiao, Xiaoqing Zhang, Risa Higashita, and more
Pathology context and expert experience play significant roles in clinical ocular disease diagnosis. Although deep neural networks (DNNs) achieve good ocular disease recognition results, they rarely exploit clinical pathology context or expert experience priors to improve recognition performance and decision-making interpretability. To this end, we first develop a novel Pathology Recalibration Module (PRM) that leverages the pathology context prior by combining a pixel-wise context compression operator with a pathology distribution concentration operator; we then apply a novel Expert Prior Guidance Adapter (EPGA) to further highlight significant pixel-wise representation regions by fully mining the expert experience prior. By incorporating PRM and EPGA into a modern DNN, we construct the Pathology Context Recalibration Network (PCRNet) for automated ocular disease recognition. Additionally, we introduce an Integrated Loss (IL) that boosts the recognition performance of PCRNet by accounting for sample-wise loss distributions and training label frequencies. Extensive experiments on three ocular disease datasets demonstrate the superiority of PCRNet with IL over state-of-the-art attention-based networks and advanced loss functions. Further visualization analysis explains how the inherent behavior of PRM and EPGA affects the decision-making process of DNNs.
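The abstract names the PRM's two operators but gives no implementation details, so the following is only a minimal PyTorch sketch of a generic pixel-wise recalibration block in the spirit described above: per-pixel channel context is compressed and then concentrated into a gating map that rescales the features. The class name PixelRecalibration, the 1x1-convolution design, and the reduction ratio are illustrative assumptions, not the paper's PRM.

```python
# Hypothetical sketch of a pixel-wise recalibration block (NOT the paper's PRM).
import torch
import torch.nn as nn


class PixelRecalibration(nn.Module):
    """Compress per-pixel channel context, then concentrate it into a
    gating map that rescales the input feature map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Pixel-wise context compression: 1x1 convs squeeze the channel
        # dimension at every spatial location (assumed design).
        self.compress = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # "Concentration": map the compressed context to a single per-pixel
        # gate in [0, 1] (a stand-in for the paper's pathology distribution
        # concentration operator, whose exact form is not given here).
        self.concentrate = nn.Sequential(
            nn.Conv2d(channels // reduction, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = self.concentrate(self.compress(x))  # (B, 1, H, W)
        return x * gate                            # recalibrated features


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    print(PixelRecalibration(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```

Such a block can be dropped between backbone stages of a standard CNN; the paper's actual PRM and EPGA designs, as well as the Integrated Loss, are specified only in the full text.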
Similar Papers
Explaining Digital Pathology Models via Clustering Activations
CV and Pattern Recognition
Shows doctors how computers see diseases in slides.
Pathology-Aware Prototype Evolution via LLM-Driven Semantic Disambiguation for Multicenter Diabetic Retinopathy Diagnosis
Artificial Intelligence
Helps doctors spot eye disease earlier and better.
MedEyes: Learning Dynamic Visual Focus for Medical Progressive Diagnosis
CV and Pattern Recognition
Helps doctors diagnose illnesses by looking at images.