Jina-VLM: Small Multilingual Vision Language Model

Published: December 3, 2025 | arXiv ID: 2512.04032v1

By: Andreas Koukounas, Georgios Mastrapas, Florian Hönicke, and others

Potential Business Impact:

Enables computers to interpret images and answer questions about them, in multiple languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We present Jina-VLM, a 2.4B parameter vision-language model that achieves state-of-the-art multilingual visual question answering among open 2B-scale VLMs. The model couples a SigLIP2 vision encoder with a Qwen3 language backbone through an attention-pooling connector that enables token-efficient processing of arbitrary-resolution images. Across standard VQA benchmarks and multilingual evaluations, Jina-VLM outperforms comparable models while preserving competitive text-only performance.
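The attention-pooling connector mentioned above can be illustrated with a minimal sketch: a small, fixed set of learned query tokens cross-attends over a variable-length sequence of vision patch tokens, so images of any resolution are compressed to the same token budget before reaching the language backbone. This is a generic single-head cross-attention pooling example, not the paper's actual implementation; all names, dimensions, and the absence of multi-head/output projections are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(patch_tokens, queries, Wq, Wk, Wv):
    """Compress a variable number of patch tokens to a fixed number.

    patch_tokens: (n_patches, d) -- varies with image resolution
    queries:      (n_out, d)     -- learned, fixed-size query set
    Returns:      (n_out, d)     -- fixed-size pooled representation
    """
    q = queries @ Wq
    k = patch_tokens @ Wk
    v = patch_tokens @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (n_out, n_patches)
    attn = softmax(scores, axis=-1)           # rows sum to 1
    return attn @ v                           # (n_out, d)

rng = np.random.default_rng(0)
d, n_out = 64, 8  # hypothetical embedding size and output token count
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.05 for _ in range(3))
queries = rng.standard_normal((n_out, d))  # stands in for learned queries

# Different resolutions yield different patch counts, same pooled shape.
for n_patches in (196, 1024):
    patches = rng.standard_normal((n_patches, d))
    pooled = attention_pool(patches, queries, Wq, Wk, Wv)
    print(pooled.shape)  # (8, 64) in both cases
```

The key property is visible in the loop: whether an image produces 196 or 1024 patch tokens, the language model always receives exactly `n_out` tokens, which is what makes the connector token-efficient for arbitrary-resolution inputs.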

Repos / Data Links

Page Count
18 pages

Category
Computer Science:
Computation and Language