Score: 2

GigaEmbeddings: Efficient Russian Language Embedding Model

Published: October 25, 2025 | arXiv ID: 2510.22369v1

By: Egor Kolodin, Daria Khomich, Nikita Savushkin, and more

Potential Business Impact:

Improves how software searches, classifies, and clusters Russian-language text.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We introduce GigaEmbeddings, a novel framework for training high-performance Russian-focused text embeddings through hierarchical instruction tuning of a decoder-only LLM designed specifically for the Russian language (GigaChat-3B). Our three-stage pipeline, comprising large-scale contrastive pre-training on web-scale corpora, fine-tuning with hard negatives, and multitask generalization across retrieval, classification, and clustering tasks, addresses key limitations of existing methods by unifying diverse objectives and leveraging synthetic data generation. Architectural innovations include bidirectional attention for contextual modeling, latent attention pooling for robust sequence aggregation, and strategic pruning of 25% of the transformer layers to enhance efficiency without compromising performance. Evaluated on the ruMTEB benchmark spanning 23 tasks, GigaEmbeddings achieves state-of-the-art results (69.1 avg. score), outperforming strong baselines with substantially more parameters.
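
To make the sequence-aggregation step concrete, below is a minimal PyTorch sketch of latent attention pooling as the abstract describes it: token states from the LLM's last layer cross-attend to a small trainable latent array, and the result is mean-pooled into a single embedding (this design was popularized by NV-Embed). The class name, dimensions, and hyperparameters (LatentAttentionPooler, num_latents, hidden_dim) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentAttentionPooler(nn.Module):
    """Cross-attention pooling: tokens attend to trainable latents,
    then outputs are masked-mean-pooled into one embedding.
    Hypothetical sketch; not the authors' exact implementation."""

    def __init__(self, hidden_dim: int = 2048, num_latents: int = 512, num_heads: int = 8):
        super().__init__()
        # Trainable latent array shared across all input sequences.
        self.latents = nn.Parameter(torch.randn(num_latents, hidden_dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, 4 * hidden_dim),
            nn.GELU(),
            nn.Linear(4 * hidden_dim, hidden_dim),
        )

    def forward(self, hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) from the LLM's last layer.
        # attention_mask: (batch, seq_len) with 1 for real tokens, 0 for padding.
        batch = hidden_states.size(0)
        kv = self.latents.unsqueeze(0).expand(batch, -1, -1)
        # Tokens are queries; the shared latent array supplies keys and values.
        attended, _ = self.cross_attn(hidden_states, kv, kv)
        attended = attended + self.mlp(attended)  # residual MLP refinement
        # Masked mean over valid (non-padding) token positions.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (attended * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
        return F.normalize(pooled, dim=-1)  # unit-norm embedding for cosine similarity
```

Compared with last-token or plain mean pooling, this lets every token contribute through a learned bottleneck, which is the robustness argument the abstract makes for latent attention pooling.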

Country of Origin
🇷🇺 Russian Federation

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Computation and Language