AndesVL Technical Report: An Efficient Mobile-side Multimodal Large Language Model
By: Zhiwei Jin, Xiaohui Song, Nan Wang, and more
Potential Business Impact:
Lets phones understand pictures and text on-device, without relying on the cloud.
In recent years, cloud-based MLLMs such as QwenVL, InternVL, GPT-4o, Gemini, and Claude Sonnet have demonstrated outstanding performance, but their enormous model sizes, reaching hundreds of billions of parameters, far exceed the memory, power, and compute budgets of edge devices such as mobile phones. This paper introduces AndesVL, a suite of mobile-side MLLMs with 0.6B to 4B parameters built on Qwen3's LLM and various visual encoders. We comprehensively outline the model architectures, training pipeline, and training data of AndesVL, which achieves first-tier performance across a wide range of open-source benchmarks, including text-rich image understanding, reasoning and math, multi-image comprehension, general VQA, hallucination mitigation, multilingual understanding, and GUI-related tasks, when compared with state-of-the-art models of a similar scale. Furthermore, we introduce a 1+N LoRA …
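The abstract describes AndesVL as a vision encoder paired with a Qwen3 LLM. The sketch below illustrates the common encoder-projector-decoder layout such mobile-side MLLMs typically follow (visual tokens projected into the LLM's embedding space and prepended to the text tokens); all class names, dimensions, and the toy Transformer standing in for Qwen3 are illustrative assumptions, not AndesVL's actual implementation.

```python
# Minimal sketch of an encoder-projector-decoder MLLM, assuming the standard
# recipe (ViT-style encoder -> MLP projector -> decoder-only LLM).
# Names and sizes are placeholders, not AndesVL's real components.
import torch
import torch.nn as nn

class VisionEncoder(nn.Module):
    """Stand-in for a ViT-style image encoder producing patch embeddings."""
    def __init__(self, patch_dim=768):
        super().__init__()
        self.proj = nn.Linear(3 * 16 * 16, patch_dim)  # toy patch embedding

    def forward(self, patches):  # patches: (B, num_patches, 3*16*16)
        return self.proj(patches)

class Projector(nn.Module):
    """Maps visual tokens into the LLM's embedding space."""
    def __init__(self, vis_dim=768, llm_dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vis_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )

    def forward(self, vis_tokens):
        return self.mlp(vis_tokens)

class TinyMLLM(nn.Module):
    """Prepends projected visual tokens to text embeddings and runs a small
    Transformer (a toy stand-in for the Qwen3 LLM backbone)."""
    def __init__(self, vocab=32000, llm_dim=1024):
        super().__init__()
        self.encoder = VisionEncoder()
        self.projector = Projector(llm_dim=llm_dim)
        self.tok_emb = nn.Embedding(vocab, llm_dim)
        layer = nn.TransformerEncoderLayer(llm_dim, nhead=8, batch_first=True)
        self.llm = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(llm_dim, vocab)

    def forward(self, patches, input_ids):
        vis = self.projector(self.encoder(patches))   # (B, P, D) visual tokens
        txt = self.tok_emb(input_ids)                 # (B, T, D) text tokens
        seq = torch.cat([vis, txt], dim=1)            # image tokens come first
        return self.lm_head(self.llm(seq))

if __name__ == "__main__":
    model = TinyMLLM()
    patches = torch.randn(1, 256, 3 * 16 * 16)
    ids = torch.randint(0, 32000, (1, 16))
    print(model(patches, ids).shape)  # (1, 256 + 16, 32000)
```

In practice the 0.6B-4B parameter budget cited in the abstract would constrain the LLM depth, width, and number of visual tokens far more than this toy example suggests; the sketch only shows how the pieces connect.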
Similar Papers
HyperVL: An Efficient and Dynamic Multimodal Large Language Model for Edge Devices
CV and Pattern Recognition
Makes smart AI work on your phone.
MindVL: Towards Efficient and Effective Training of Multimodal Large Language Models on Ascend NPUs
CV and Pattern Recognition
Lets computers understand pictures and text better.