VARCO-VISION-2.0 Technical Report
By: Young-rok Cha, Jeongho Ju, SunYoung Park, and more
Potential Business Impact:
Lets computers understand pictures and text together.
We introduce VARCO-VISION-2.0, an open-weight bilingual vision-language model (VLM) for Korean and English with improved capabilities over its predecessor, VARCO-VISION-14B. The model supports multi-image understanding for complex inputs such as documents, charts, and tables, and delivers layout-aware OCR by predicting both textual content and its spatial location. Trained through a four-stage curriculum with memory-efficient techniques, the model achieves enhanced multimodal alignment while preserving core language abilities and improving safety via preference optimization. Extensive benchmark evaluations demonstrate strong spatial grounding and competitive results in both languages, with the 14B model achieving 8th place on the OpenCompass VLM leaderboard among models of comparable scale. Alongside the 14B-scale model, we release a 1.7B version optimized for on-device deployment. We believe these models advance the development of bilingual VLMs and their practical applications. Two variants of VARCO-VISION-2.0 are available on Hugging Face: a full-scale 14B model and a lightweight 1.7B model.
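As a quick illustration of how the released checkpoints might be used, the sketch below loads the model with Hugging Face transformers and poses a layout-oriented OCR query on a document image. This is a minimal sketch, not an official usage guide: the repo id "NCSOFT/VARCO-VISION-2.0-14B", the LlavaOnevisionForConditionalGeneration model class, and the chat-template prompt format are assumptions carried over from the previous VARCO-VISION release, not details confirmed by this report.

    # Minimal inference sketch. Assumes the checkpoint follows the
    # LLaVA-OneVision layout of the earlier VARCO-VISION release;
    # the repo id below is an assumption, not confirmed by the report.
    import torch
    from PIL import Image
    from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

    model_id = "NCSOFT/VARCO-VISION-2.0-14B"  # assumed id; a 1.7B variant is also released
    processor = AutoProcessor.from_pretrained(model_id)
    model = LlavaOnevisionForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    # Build a chat-style prompt with an image placeholder; the processor's
    # chat template interleaves image tokens and text for the model.
    conversation = [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "Read all text in this page and report where each line appears."},
            ],
        }
    ]
    prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

    image = Image.open("document_page.png")  # any local document scan
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=512)
    print(processor.decode(output_ids[0], skip_special_tokens=True))

The 1.7B on-device variant would follow the same pattern with a smaller memory footprint; only the repo id changes.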
Similar Papers
Exploring OCR-augmented Generation for Bilingual VQA
CV and Pattern Recognition
Lets computers read and understand pictures with text.
PaddleOCR-VL: Boosting Multilingual Document Parsing via a 0.9B Ultra-Compact Vision-Language Model
CV and Pattern Recognition
Reads any document, even complex ones, fast.