Score: 1

Automated Glaucoma Report Generation via Dual-Attention Semantic Parallel-LSTM and Multimodal Clinical Data Integration

Published: October 11, 2025 | arXiv ID: 2510.10037v1

By: Cheng Huang, Weizheng Xie, Zeyu Han, and more

Potential Business Impact:

Automatically generates glaucoma diagnostic reports from eye scans, helping doctors identify disease faster.

Business Areas:
Image Recognition, Data and Analytics, Software

Generative AI for automated glaucoma diagnostic report generation faces two predominant challenges: content redundancy in narrative outputs and inadequate highlighting of pathologically significant features, including optic disc cupping, retinal nerve fiber layer defects, and visual field abnormalities. These limitations primarily stem from current multimodal architectures' insufficient capacity to extract discriminative structural-textural patterns from fundus imaging data while maintaining precise semantic alignment with domain-specific terminology in comprehensive clinical reports. To overcome these constraints, we present the Dual-Attention Semantic Parallel-LSTM Network (DA-SPL), an advanced multimodal generation framework that synergistically processes both fundus imaging and supplementary visual inputs. DA-SPL employs an Encoder-Decoder structure augmented with a novel joint dual-attention mechanism in the encoder for cross-modal feature refinement, a parallelized LSTM decoder for enhanced temporal-semantic consistency, and a specialized label enhancement module for accurate generation of disease-relevant terms. Rigorous evaluation on standard glaucoma datasets demonstrates DA-SPL's consistent superiority over state-of-the-art models across quantitative metrics. DA-SPL excels at extracting subtle pathological indicators from multimodal inputs while generating diagnostically precise reports that show strong concordance with clinical expert annotations.
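The abstract outlines the architecture only at a high level. The sketch below, written in PyTorch, illustrates one plausible reading of it: two visual feature streams fused by a joint dual-attention block, then decoded by parallel LSTMs whose outputs are merged before a vocabulary projection. All module names, dimensions, and the fusion strategy are illustrative assumptions, not the authors' published implementation; the label enhancement module in particular is only approximated here by a learned output projection.

```python
# Minimal sketch (assumptions, not the paper's implementation) of a
# dual-attention encoder feeding parallel LSTM decoders.
import torch
import torch.nn as nn


class JointDualAttention(nn.Module):
    """Cross-attends fundus features to auxiliary features and vice versa."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.fundus_to_aux = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.aux_to_fundus = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, fundus_feats, aux_feats):
        # Each stream queries the other, so cues present in one modality
        # can refine the representation of the other.
        refined_fundus, _ = self.fundus_to_aux(fundus_feats, aux_feats, aux_feats)
        refined_aux, _ = self.aux_to_fundus(aux_feats, fundus_feats, fundus_feats)
        return torch.cat([refined_fundus, refined_aux], dim=1)


class ParallelLSTMDecoder(nn.Module):
    """Two LSTMs decode the fused features in parallel; outputs are averaged."""

    def __init__(self, vocab_size: int, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm_a = nn.LSTM(dim, dim, batch_first=True)
        self.lstm_b = nn.LSTM(dim, dim, batch_first=True)
        # Stand-in for "label enhancement": a learned projection onto the
        # report vocabulary (disease-relevant terms included).
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, fused_feats, token_ids):
        # Condition both LSTMs on the pooled multimodal context.
        context = fused_feats.mean(dim=1, keepdim=True)
        x = self.embed(token_ids) + context
        h_a, _ = self.lstm_a(x)
        h_b, _ = self.lstm_b(x)
        return self.out((h_a + h_b) / 2)


# Usage with random tensors standing in for extracted image features.
if __name__ == "__main__":
    dim, vocab = 256, 1000
    fundus = torch.randn(2, 49, dim)   # e.g. a flattened 7x7 CNN feature map
    aux = torch.randn(2, 49, dim)      # supplementary visual input features
    tokens = torch.randint(0, vocab, (2, 20))

    fused = JointDualAttention(dim)(fundus, aux)
    logits = ParallelLSTMDecoder(vocab, dim)(fused, tokens)
    print(logits.shape)  # torch.Size([2, 20, 1000])
```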

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Computational Engineering, Finance, and Science