Compact Multimodal Language Models as Robust OCR Alternatives for Noisy Textual Clinical Reports
By: Nikita Neveditsin, Pawan Lingras, Salil Patil, and more
Potential Business Impact:
Reads messy doctor notes from phone pictures.
Digitization of medical records often relies on smartphone photographs of printed reports, producing images degraded by blur, shadows, and other noise. Conventional OCR systems, optimized for clean scans, perform poorly under such real-world conditions. This study evaluates compact multimodal language models as privacy-preserving alternatives for transcribing noisy clinical documents. Using obstetric ultrasound reports written in regionally inflected medical English common to Indian healthcare settings, we compare eight systems in terms of transcription accuracy, noise sensitivity, numeric accuracy, and computational efficiency. Compact multimodal models consistently outperform both classical and neural OCR pipelines. Despite higher computational costs, their robustness and linguistic adaptability position them as viable candidates for on-premises healthcare digitization.
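The abstract compares systems on transcription accuracy but does not spell out the metric. A standard choice for OCR evaluation is character error rate (CER): edit distance between the reference and the hypothesis, normalized by reference length. The sketch below is illustrative only and is not taken from the paper; the example strings are hypothetical.

```python
# Minimal sketch: character error rate (CER), a common metric for
# comparing OCR/transcription accuracy. Illustrative only -- the paper
# does not specify its exact implementation.

def levenshtein(a: str, b: str) -> int:
    """Edit distance between strings a and b (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance / reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

# Hypothetical example: a noisy transcription of a numeric field
# from an ultrasound report (two character substitutions).
ref = "BPD: 48.2 mm"
hyp = "BPD: 43.2 mn"
print(round(cer(ref, hyp), 3))  # -> 0.167
```

Numeric accuracy, which the study tracks separately, matters because a single-digit substitution (as in `48.2` vs `43.2` above) changes the clinical meaning far more than a misread letter.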
Similar Papers
Multi-Stage Field Extraction of Financial Documents with OCR and Compact Vision-Language Models
Information Retrieval
Reads messy business papers faster and better.
A Multimodal Pipeline for Clinical Data Extraction: Applying Vision-Language Models to Scans of Transfusion Reaction Reports
Computation and Language
Reads checkboxes on paper forms automatically.
Ultrasound Report Generation with Multimodal Large Language Models for Standardized Texts
Image and Video Processing
Writes doctor reports for ultrasound pictures.