Exploring OCR-augmented Generation for Bilingual VQA
By: JoonHo Lee, Sunho Park
Potential Business Impact:
Lets computers read and understand text in pictures.
We investigate OCR-augmented generation with Vision Language Models (VLMs), exploring Korean and English tasks as a step toward multilingual coverage. To support research in this domain, we train and release KLOCR, a strong bilingual OCR baseline trained on 100M instances, to augment VLMs with OCR capability. To complement existing VQA benchmarks, we curate KOCRBench for Korean VQA and analyze different prompting methods. Extensive experiments show that OCR-extracted text significantly boosts performance across open-source and commercial models. Our work offers new insights into OCR-augmented generation for bilingual VQA. Model, code, and data are available at https://github.com/JHLee0513/KLOCR.
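The core pattern behind OCR-augmented generation is straightforward: run an OCR model over the image, then inject the extracted text into the prompt alongside the question before querying the VLM. The sketch below illustrates one such prompting variant in Python; `run_ocr` and `vlm_generate` are hypothetical stand-ins for KLOCR and a VLM API (the actual interfaces and prompt templates are in the linked repository), not the authors' exact implementation.

```python
def build_ocr_augmented_prompt(question: str, ocr_text: str) -> str:
    """Prepend OCR output to the VQA question (one possible prompt template)."""
    return (
        "Text extracted from the image by an OCR model:\n"
        f"{ocr_text}\n\n"
        f"Question: {question}\n"
        "Answer using both the image and the extracted text."
    )


# Stubs standing in for KLOCR and a VLM; names and return values here are
# illustrative placeholders, not the real APIs.
def run_ocr(image_path: str) -> str:
    return "총액: 15,000원"  # e.g. Korean receipt text: "Total: 15,000 won"


def vlm_generate(image_path: str, prompt: str) -> str:
    return "15,000 won"  # placeholder VLM response


if __name__ == "__main__":
    image = "receipt.jpg"
    prompt = build_ocr_augmented_prompt("What is the total amount?", run_ocr(image))
    print(vlm_generate(image, prompt))
```

The paper's finding is that supplying this extracted text, rather than relying on the VLM to read the image alone, significantly improves VQA accuracy in both Korean and English.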
Similar Papers
ThaiOCRBench: A Task-Diverse Benchmark for Vision-Language Understanding in Thai
Computation and Language
Helps computers understand Thai documents better.
VARCO-VISION-2.0 Technical Report
CV and Pattern Recognition
Lets computers understand pictures and text together.