GigaEmbeddings: Efficient Russian Language Embedding Model
By: Egor Kolodin, Daria Khomich, Nikita Savushkin, and more
Potential Business Impact:
Helps computers understand Russian text better.
We introduce GigaEmbeddings, a novel framework for training high-performance Russian-focused text embeddings through hierarchical instruction tuning of a decoder-only LLM designed specifically for Russian (GigaChat-3B). Our three-stage pipeline, comprising large-scale contrastive pre-training on web-scale corpora, fine-tuning with hard negatives, and multitask generalization across retrieval, classification, and clustering tasks, addresses key limitations of existing methods by unifying diverse objectives and leveraging synthetic data generation. Architectural innovations include bidirectional attention for contextual modeling, latent attention pooling for robust sequence aggregation, and strategic pruning of 25% of transformer layers to enhance efficiency without compromising performance. Evaluated on the ruMTEB benchmark spanning 23 multilingual tasks, GigaEmbeddings achieves state-of-the-art results (69.1 avg. score), outperforming strong baselines that use substantially more parameters.
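The training signal behind the first two stages is contrastive: each query embedding should score higher against its positive document than against in-batch negatives and mined hard negatives. Below is a minimal sketch of a standard InfoNCE-style formulation of that objective; the function name, tensor shapes, and temperature value are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q, d_pos, d_hard, temperature=0.05):
    """InfoNCE-style loss: each query is scored against its positive,
    the other queries' positives (in-batch negatives), and its own
    mined hard negatives.

    q:      (B, H) query embeddings
    d_pos:  (B, H) positive document embeddings
    d_hard: (B, K, H) hard-negative document embeddings per query
    """
    q = F.normalize(q, dim=-1)
    d_pos = F.normalize(d_pos, dim=-1)
    d_hard = F.normalize(d_hard, dim=-1)

    # (B, B): query vs. all positives; the diagonal holds the true pairs.
    in_batch = q @ d_pos.T
    # (B, K): query vs. its own hard negatives.
    hard = torch.einsum("bh,bkh->bk", q, d_hard)

    logits = torch.cat([in_batch, hard], dim=1) / temperature
    labels = torch.arange(q.size(0), device=q.device)  # diagonal index = positive
    return F.cross_entropy(logits, labels)
```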
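Latent attention pooling replaces last-token or mean pooling with a small cross-attention read-out over the backbone's token states. The sketch below shows one common Perceiver-style formulation, in which trainable latent queries attend over the sequence and the results are averaged; the module name, latent count, head count, and residual MLP are assumptions, and the exact variant used in GigaEmbeddings may differ.

```python
import torch
import torch.nn as nn

class LatentAttentionPooling(nn.Module):
    """Pools a variable-length sequence of hidden states into a single
    embedding by letting trainable latent queries cross-attend over the
    token states (a hypothetical sketch, not the paper's exact module)."""

    def __init__(self, hidden_size: int, num_latents: int = 512, num_heads: int = 8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, hidden_size) * 0.02)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, 4 * hidden_size),
            nn.GELU(),
            nn.Linear(4 * hidden_size, hidden_size),
        )

    def forward(self, hidden_states, padding_mask=None):
        # hidden_states: (B, T, H); padding_mask: (B, T), True = padded token.
        B = hidden_states.size(0)
        q = self.latents.unsqueeze(0).expand(B, -1, -1)        # (B, L, H)
        pooled, _ = self.attn(q, hidden_states, hidden_states,
                              key_padding_mask=padding_mask)   # (B, L, H)
        pooled = pooled + self.mlp(pooled)                     # residual MLP
        return pooled.mean(dim=1)                              # (B, H)
```

Because the backbone uses bidirectional attention, every token state already carries full-sequence context before this pooling step, which is part of why a learned read-out can outperform simply taking the last token.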
Similar Papers
Evaluating Embedding Models and Pipeline Optimization for AI Search Quality
Information Retrieval
Makes AI search find information much better.
LGAI-EMBEDDING-Preview Technical Report
Computation and Language
Helps computers understand text for many jobs.
Gemini Embedding: Generalizable Embeddings from Gemini
Computation and Language
Helps computers understand many languages and code.