Compact Multimodal Language Models as Robust OCR Alternatives for Noisy Textual Clinical Reports

Published: November 17, 2025 | arXiv ID: 2511.13523v1

By: Nikita Neveditsin, Pawan Lingras, Salil Patil, and more

Potential Business Impact:

Reads messy doctor notes from phone pictures.

Business Areas:
Document Management, Information Technology, Software

Digitization of medical records often relies on smartphone photographs of printed reports, producing images degraded by blur, shadows, and other noise. Conventional OCR systems, optimized for clean scans, perform poorly under such real-world conditions. This study evaluates compact multimodal language models as privacy-preserving alternatives for transcribing noisy clinical documents. Using obstetric ultrasound reports written in regionally inflected medical English common to Indian healthcare settings, we compare eight systems in terms of transcription accuracy, noise sensitivity, numeric accuracy, and computational efficiency. Compact multimodal models consistently outperform both classical and neural OCR pipelines. Despite higher computational costs, their robustness and linguistic adaptability position them as viable candidates for on-premises healthcare digitization.
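The transcription-accuracy comparison described in the abstract is typically scored with edit-distance metrics such as character error rate (CER). The paper's exact metric definitions are not reproduced here, so the following is an illustrative sketch of a generic CER computation, not the authors' implementation.

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Edit distance between reference and hypothesis strings,
    computed with a standard dynamic-programming table."""
    prev = list(range(len(hyp) + 1))
    for i, rc in enumerate(ref, 1):
        curr = [i]
        for j, hc in enumerate(hyp, 1):
            cost = 0 if rc == hc else 1
            curr.append(min(prev[j] + 1,       # deletion
                            curr[j - 1] + 1,   # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate: edit distance normalized by reference length."""
    return levenshtein(ref, hyp) / max(len(ref), 1)
```

For instance, `cer("EFW 2450 g", "EFW 2450 g")` is `0.0`, while an OCR output that drops or substitutes characters yields a proportionally higher score; numeric accuracy (also evaluated in the paper) would additionally check that digit spans match exactly.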

Country of Origin
🇨🇦 Canada

Page Count
12 pages

Category
Computer Science:
Information Retrieval