Jina-VLM: Small Multilingual Vision Language Model
By: Andreas Koukounas, Georgios Mastrapas, Florian Hönicke, and more
Potential Business Impact:
Lets computers understand pictures and answer questions about them.
We present Jina-VLM, a 2.4B parameter vision-language model that achieves state-of-the-art multilingual visual question answering among open 2B-scale VLMs. The model couples a SigLIP2 vision encoder with a Qwen3 language backbone through an attention-pooling connector that enables token-efficient processing of arbitrary-resolution images. Across standard VQA benchmarks and multilingual evaluations, Jina-VLM outperforms comparable models while preserving competitive text-only performance.
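To make the connector idea concrete, below is a minimal sketch of an attention-pooling connector: a fixed set of learned query vectors cross-attends to the vision encoder's patch embeddings, so an arbitrary number of image patches is compressed into a fixed number of tokens before projection into the language model's embedding space. The dimensions (1152 for the vision encoder, 2048 for the language backbone), the query count, and the use of PyTorch's nn.MultiheadAttention are illustrative assumptions, not the released Jina-VLM implementation.

```python
import torch
import torch.nn as nn


class AttentionPoolingConnector(nn.Module):
    """Hypothetical attention-pooling connector sketch (not the official code)."""

    def __init__(self, vision_dim: int = 1152, llm_dim: int = 2048,
                 num_queries: int = 64, num_heads: int = 8):
        super().__init__()
        # Learned queries: each one pools information from all patch tokens.
        self.queries = nn.Parameter(torch.randn(num_queries, vision_dim) * 0.02)
        self.attn = nn.MultiheadAttention(vision_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(vision_dim)
        # Project pooled vision features into the LLM token embedding space.
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, patch_embeds: torch.Tensor) -> torch.Tensor:
        # patch_embeds: (batch, num_patches, vision_dim). num_patches may vary
        # with image resolution, but the output length is fixed at num_queries,
        # which is what makes the processing token-efficient.
        batch = patch_embeds.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        pooled, _ = self.attn(q, patch_embeds, patch_embeds)
        return self.proj(self.norm(pooled))  # (batch, num_queries, llm_dim)


if __name__ == "__main__":
    connector = AttentionPoolingConnector()
    patches = torch.randn(2, 1024, 1152)  # e.g. a high-resolution patch grid
    print(connector(patches).shape)  # torch.Size([2, 64, 2048])
```

However the actual connector is implemented, the key design point stated in the abstract is that it keeps the number of visual tokens passed to the language backbone manageable regardless of input image resolution.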
Similar Papers
ViExam: Are Vision Language Models Better than Humans on Vietnamese Multimodal Exam Questions?
Computation and Language
Helps computers understand Vietnamese school tests.
A Survey on Efficient Vision-Language Models
CV and Pattern Recognition
Makes smart AI work on small, slow devices.