Score: 2

Language-Agnostic Visual Embeddings for Cross-Script Handwriting Retrieval

Published: January 16, 2026 | arXiv ID: 2601.11248v1

By: Fangke Chen, Tianhao Dong, Sirry Chen, and more

Potential Business Impact:

Lets computers search handwritten archives for any word, across languages and scripts, on low-power devices.

Business Areas:
Image Recognition, Data and Analytics, Software

Handwritten word retrieval is vital for digital archives but remains challenging due to large handwriting variability and cross-lingual semantic gaps. While large vision-language models offer potential solutions, their prohibitive computational costs hinder practical edge deployment. To address this, we propose a lightweight asymmetric dual-encoder framework that learns unified, style-invariant visual embeddings. By jointly optimizing instance-level alignment and class-level semantic consistency, our approach anchors visual embeddings to language-agnostic semantic prototypes, enforcing invariance across scripts and writing styles. Experiments show that our method outperforms 28 baselines and achieves state-of-the-art accuracy on within-language retrieval benchmarks. We further conduct explicit cross-lingual retrieval, where the query language differs from the target language, to validate the effectiveness of the learned cross-lingual representations. Achieving strong performance with only a fraction of the parameters required by existing models, our framework enables accurate and resource-efficient cross-script handwriting retrieval.
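To make the described training objective concrete, here is a minimal sketch of an asymmetric dual-encoder trained with an instance-level contrastive alignment loss plus a class-level loss that pulls both branches toward shared word-class prototypes. All module sizes, names, and loss choices (InfoNCE-style terms, learnable prototypes) are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch (assumptions, not the paper's code): a lightweight query
# encoder and a larger gallery encoder, trained so that (1) matching
# query/gallery pairs align at the instance level and (2) both embeddings
# classify to the same language-agnostic word-class prototype.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_encoder(widths):
    """Tiny CNN mapping a 1x64x64 word image to a 128-d embedding."""
    layers, in_ch = [], 1
    for w in widths:
        layers += [nn.Conv2d(in_ch, w, 3, stride=2, padding=1), nn.ReLU()]
        in_ch = w
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, 128)]
    return nn.Sequential(*layers)


class DualEncoder(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.query_enc = conv_encoder([16, 32, 64])           # lightweight branch
        self.gallery_enc = conv_encoder([32, 64, 128, 128])   # larger branch
        # One learnable prototype per word class, shared across scripts.
        self.prototypes = nn.Parameter(torch.randn(num_classes, 128))

    def forward(self, query_img, gallery_img):
        q = F.normalize(self.query_enc(query_img), dim=-1)
        g = F.normalize(self.gallery_enc(gallery_img), dim=-1)
        return q, g


def training_loss(model, query_img, gallery_img, labels, tau=0.07):
    q, g = model(query_img, gallery_img)
    # Instance-level alignment: matching pairs sit on the diagonal of the
    # similarity matrix (symmetric InfoNCE).
    sim = q @ g.t() / tau
    targets = torch.arange(q.size(0), device=q.device)
    inst = 0.5 * (F.cross_entropy(sim, targets) +
                  F.cross_entropy(sim.t(), targets))
    # Class-level consistency: both embeddings should match the same
    # word-class prototype regardless of script or writing style.
    protos = F.normalize(model.prototypes, dim=-1)
    cls = 0.5 * (F.cross_entropy(q @ protos.t() / tau, labels) +
                 F.cross_entropy(g @ protos.t() / tau, labels))
    return inst + cls


# Toy usage with random tensors standing in for handwritten word images.
model = DualEncoder(num_classes=50)
imgs_q = torch.randn(8, 1, 64, 64)
imgs_g = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 50, (8,))
loss = training_loss(model, imgs_q, imgs_g, labels)
loss.backward()
```

The asymmetry here is only in capacity: the small query encoder keeps inference cheap on edge devices, while the prototype term is what enforces the style- and script-invariance the abstract describes.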

Country of Origin
πŸ‡¨πŸ‡³ πŸ‡ΈπŸ‡¬ China, Singapore

Page Count
9 pages

Category
Computer Science:
CV and Pattern Recognition