Improving OCR using internal document redundancy
By: Diego Belzarena, Seginus Mowlavi, Aitor Artola, and more
Potential Business Impact:
Improves text recognition in degraded old documents.
Current OCR systems are based on deep learning models trained on large amounts of data. Although they have shown some ability to generalize to unseen data, especially in detection tasks, they can struggle to recognize low-quality data. This is particularly evident for printed documents, where intra-domain data variability is typically low but inter-domain data variability is high. In that context, current OCR methods do not fully exploit each document's redundancy. We propose an unsupervised method that leverages the redundancy of character shapes within a document to correct imperfect outputs of a given OCR system and to suggest better clusterings. To this aim, we introduce an extended Gaussian Mixture Model (GMM) by alternating an Expectation-Maximization (EM) algorithm with an intra-cluster realignment process and statistical normality testing. We demonstrate improvements on documents with various levels of degradation, including recovered Uruguayan military archives and 17th- to mid-20th-century European newspapers.
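To make the abstract's pipeline concrete, here is a minimal sketch, not the authors' implementation, of the general idea: alternate EM-based GMM clustering of character crops with an intra-cluster realignment step, then apply a normality test to flag clusters that may mix distinct glyph shapes. It assumes glyphs arrive as same-sized grayscale patches; the helper `realign_to_centroid`, the translation-only shift search, and the splitting criterion are illustrative assumptions.

```python
"""Illustrative sketch: GMM/EM clustering of character crops alternated with
intra-cluster realignment, followed by a statistical normality check."""
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture


def realign_to_centroid(glyph, centroid, max_shift=2):
    """Shift `glyph` by up to `max_shift` pixels to best match `centroid` (L2 error)."""
    best, best_err = glyph, np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(glyph, dy, axis=0), dx, axis=1)
            err = np.sum((shifted - centroid) ** 2)
            if err < best_err:
                best, best_err = shifted, err
    return best


def cluster_glyphs(glyphs, n_clusters, n_rounds=5, alpha=0.01, seed=0):
    """Alternate GMM EM fitting with per-cluster realignment; flag clusters whose
    projections fail a normality test as candidates for splitting."""
    data = glyphs.reshape(len(glyphs), -1).astype(float)
    patch_shape = glyphs.shape[1:]
    for _ in range(n_rounds):
        gmm = GaussianMixture(n_components=n_clusters, covariance_type="diag",
                              random_state=seed).fit(data)
        labels = gmm.predict(data)
        # Intra-cluster realignment: snap each glyph onto its cluster mean.
        for k in range(n_clusters):
            idx = np.where(labels == k)[0]
            if len(idx) == 0:
                continue
            centroid = data[idx].mean(axis=0).reshape(patch_shape)
            for i in idx:
                data[i] = realign_to_centroid(
                    data[i].reshape(patch_shape), centroid).ravel()
    # Normality check: project each cluster onto its principal deviation axis;
    # a non-Gaussian projection suggests the cluster mixes several characters.
    suspect = []
    for k in range(n_clusters):
        idx = np.where(labels == k)[0]
        if len(idx) < 8:  # scipy's normaltest needs at least 8 samples
            continue
        centered = data[idx] - data[idx].mean(axis=0)
        direction = np.linalg.svd(centered, full_matrices=False)[2][0]
        if stats.normaltest(centered @ direction).pvalue < alpha:
            suspect.append(k)
    return labels, suspect
```

In this sketch, the clusters flagged in `suspect` would be split and re-clustered, and the corrected cluster assignments used to revise the OCR system's character labels; the paper's actual realignment and testing procedures may differ.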
Similar Papers
Improving OCR for Historical Texts of Multiple Languages
CV and Pattern Recognition
Helps read old, messy handwriting and documents.
MonkeyOCR v1.5 Technical Report: Unlocking Robust Document Parsing for Complex Patterns
CV and Pattern Recognition
Parses messy, complex documents more reliably.
Compact Multimodal Language Models as Robust OCR Alternatives for Noisy Textual Clinical Reports
Information Retrieval
Reads messy doctor notes from phone pictures.