NeoDictaBERT: Pushing the Frontier of BERT models for Hebrew
By: Shaltiel Shmidman, Avi Shmidman, Moshe Koppel
Potential Business Impact:
Helps computers understand Hebrew text better.
Since their initial release, BERT models have demonstrated exceptional performance on a variety of tasks, despite their relatively small size (BERT-base has ~100M parameters). Nevertheless, the architectural choices used in these models are outdated compared to newer transformer-based models such as Llama3 and Qwen3. In recent months, several architectures have been proposed to close this gap. ModernBERT and NeoBERT both show strong improvements on English benchmarks and significantly extend the supported context window. Following their successes, we introduce NeoDictaBERT and NeoDictaBERT-bilingual: BERT-style models trained using the same architecture as NeoBERT, with a dedicated focus on Hebrew texts. These models outperform existing ones on almost all Hebrew benchmarks and provide a strong foundation for downstream tasks. Notably, the NeoDictaBERT-bilingual model shows strong results on retrieval tasks, outperforming other multilingual models of similar size. In this paper, we describe the training process and report results across various benchmarks. We release the models to the community as part of our goal to advance research and development in Hebrew NLP.
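The abstract notes that the models are released to the community as a foundation for downstream tasks. As an illustration only, here is a minimal sketch of how such a released checkpoint might be loaded with the Hugging Face transformers library; the repository identifier and the use of trust_remote_code are assumptions for the sketch, not details taken from the paper.

# Minimal sketch (assumption: repository id and trust_remote_code are illustrative).
from transformers import AutoTokenizer, AutoModel
import torch

MODEL_ID = "dicta-il/NeoDictaBERT"  # hypothetical Hugging Face Hub identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True)
model.eval()

# Encode a short Hebrew sentence and obtain contextual token embeddings.
inputs = tokenizer("שלום עולם", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One vector per token; these embeddings can feed classification,
# tagging, or retrieval heads as downstream tasks.
print(outputs.last_hidden_state.shape)

For retrieval-style use, token embeddings of this kind are typically pooled (for example, mean pooling) into a single sentence vector; the recommended usage would follow whatever the released model card specifies.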
Similar Papers
HalleluBERT: Let every token that has meaning bear its weight
Computation and Language
Makes computers understand Hebrew text much better.
BERnaT: Basque Encoders for Representing Natural Textual Diversity
Computation and Language
Teaches computers to understand Basque as it is really used, not just polished text.
GeistBERT: Breathing Life into German NLP
Computation and Language
Helps computers understand German better.