TabiBERT: A Large-Scale ModernBERT Foundation Model and Unified Benchmarking Framework for Turkish
By: Melikşah Türker, A. Ebrar Kızıloğlu, Onur Güngör, and more
Potential Business Impact:
Helps computers understand Turkish text faster and more accurately, including long documents.
Since the inception of BERT, encoder-only Transformers have evolved significantly in computational efficiency, training stability, and long-context modeling. ModernBERT consolidates these advances by integrating Rotary Positional Embeddings (RoPE), FlashAttention, and refined normalization. Despite these developments, Turkish NLP lacks a monolingual encoder trained from scratch that incorporates these modern architectural paradigms. This work introduces TabiBERT, a monolingual Turkish encoder based on the ModernBERT architecture and trained from scratch on a large, curated corpus. TabiBERT is pre-trained on one trillion tokens sampled from an 84.88B-token multi-domain corpus: web text (73%), scientific publications (20%), source code (6%), and mathematical content (0.3%). The model supports an 8,192-token context length (16x that of the original BERT), achieves up to 2.65x inference speedup, and reduces GPU memory consumption, enabling larger batch sizes. We introduce TabiBench, a benchmark of 28 datasets across eight task categories with standardized splits and protocols, evaluated with GLUE-style macro-averaging. TabiBERT attains 77.58 on TabiBench, outperforming BERTurk by 1.62 points and establishing state-of-the-art results in five of the eight categories, including question answering (+9.55), code retrieval (+2.41), and document retrieval (+0.60). Compared with the previous task-specific best results, including specialized models such as TurkishBERTweet, TabiBERT achieves a +1.47 average improvement, indicating robust cross-domain generalization. We release model weights, training configurations, and evaluation code for transparent, reproducible Turkish encoder research.
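The abstract states that TabiBench scores are aggregated with GLUE-style macro-averaging over eight task categories. A minimal sketch of that aggregation is shown below, assuming per-dataset scores are first averaged within each category and the category means are then averaged into the headline number; the dataset and category names in the example are illustrative placeholders, not the actual TabiBench composition.

```python
# Sketch of GLUE-style macro-averaging as described for TabiBench:
# average dataset scores within each task category, then average the
# category means. Dataset/category names below are hypothetical.
from collections import defaultdict
from statistics import mean


def tabibench_macro_average(results: dict[str, tuple[str, float]]) -> float:
    """results maps dataset name -> (task category, score in [0, 100])."""
    by_category: dict[str, list[float]] = defaultdict(list)
    for category, score in results.values():
        by_category[category].append(score)
    # Mean within each category, then mean across categories.
    return mean(mean(scores) for scores in by_category.values())


# Hypothetical example covering three of the eight categories:
example = {
    "qa_dataset_tr": ("question_answering", 81.0),
    "code_search_tr": ("code_retrieval", 74.5),
    "news_retrieval_tr": ("document_retrieval", 77.2),
}
print(f"Macro-averaged benchmark score: {tabibench_macro_average(example):.2f}")
```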
Similar Papers
TurkColBERT: A Benchmark of Dense and Late-Interaction Models for Turkish Information Retrieval
Computation and Language
Finds Turkish information much faster with less data.
SindBERT, the Sailor: Charting the Seas of Turkish NLP
Computation and Language
Helps computers understand the Turkish language better.
HalleluBERT: Let every token that has meaning bear its weight
Computation and Language
Makes computers understand Hebrew text much better.