EmbeddingGemma: Powerful and Lightweight Text Representations
By: Henrique Schechter Vera, Sahil Dua, Biao Zhang, and more
Potential Business Impact:
Helps computers understand text more accurately, faster, and at lower cost.
We introduce EmbeddingGemma, a new lightweight, open text embedding model based on the Gemma 3 language model family. Our innovative training recipe strategically captures knowledge from larger models via encoder-decoder initialization and geometric embedding distillation. We improve model robustness and expressiveness with a spread-out regularizer, and ensure generalizability by merging checkpoints from varied, optimized mixtures. Evaluated on the Massive Text Embedding Benchmark (MTEB) across multilingual, English, and code domains, EmbeddingGemma (300M) achieves state-of-the-art results. Notably, it outperforms prior top models, both proprietary and open, with fewer than 500M parameters, and provides performance comparable to models double its size, offering an exceptional performance-to-cost ratio. Remarkably, this lead persists when quantizing model weights or truncating embedding outputs. This makes EmbeddingGemma particularly well-suited for low-latency and high-throughput use cases such as on-device applications. We provide ablation studies exploring our key design choices. We release EmbeddingGemma to the community to promote further research.
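The paper does not include code, but the spread-out regularizer mentioned above is a standard idea that can be sketched: penalize pairwise similarity between embeddings of distinct inputs so they spread out over the unit hypersphere. The following is a minimal illustrative sketch in PyTorch, not the authors' implementation; the function name, batch size, and loss weighting are all assumptions.

```python
# Illustrative sketch only; EmbeddingGemma's actual regularizer may differ.
import torch
import torch.nn.functional as F

def spread_out_regularizer(embeddings: torch.Tensor) -> torch.Tensor:
    """Penalize pairwise cosine similarity within a batch so embeddings
    of distinct inputs spread out over the unit hypersphere."""
    z = F.normalize(embeddings, dim=-1)                    # unit-norm embeddings
    sims = z @ z.T                                         # cosine similarity matrix
    off_diag = sims - torch.eye(len(z), device=z.device)   # zero out self-similarity
    return (off_diag ** 2).mean()                          # second moment of similarities

# Example: add to a task loss during training (the 0.1 weight is hypothetical).
batch_embeddings = torch.randn(32, 768)                    # stand-in embedding batch
reg_loss = 0.1 * spread_out_regularizer(batch_embeddings)
print(reg_loss.item())
```

The abstract's claim that the model's lead persists when truncating embedding outputs suggests Matryoshka-style embeddings, which shorter vectors can be sliced from. A hedged usage sketch, assuming the sentence-transformers library and the Hugging Face model ID google/embeddinggemma-300m:

```python
# A minimal sketch, assuming sentence-transformers >= 2.7 and the model ID below.
from sentence_transformers import SentenceTransformer

# truncate_dim trims the full 768-dimensional output down to 256 dimensions.
model = SentenceTransformer("google/embeddinggemma-300m", truncate_dim=256)
embeddings = model.encode([
    "What is a text embedding?",
    "A dense vector representation of text.",
])
print(embeddings.shape)  # (2, 256) after truncation
```

Truncated embeddings like these reduce storage and similarity-search cost, which is part of the on-device, low-latency fit the abstract describes.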
Similar Papers
Gemini Embedding: Generalizable Embeddings from Gemini
Computation and Language
Helps computers understand many languages and code.
Gemma 3 Technical Report
Computation and Language
Lets computers understand pictures and many languages.
GEM: Empowering LLM for both Embedding Generation and Language Understanding
Computation and Language
Lets one model both embed text and understand it.