Adapting Decoder-Based Language Models for Diverse Encoder Downstream Tasks
By: Paul Suganthan, Fedor Moiseev, Le Yan, and more
Potential Business Impact:
Makes smart computer programs better at understanding text.
Decoder-based transformers, while revolutionizing language modeling and scaling to immense sizes, have not completely overtaken encoder-heavy architectures in natural language processing. Specifically, encoder-only models remain dominant in tasks like classification, regression, and ranking. This is primarily due to the inherent structure of decoder-based models, which limits their direct applicability to these tasks. In this paper, we introduce Gemma Encoder, adapting the powerful Gemma decoder model to an encoder architecture, thereby unlocking its potential for a wider range of non-generative applications. To optimize the adaptation from decoder to encoder, we systematically analyze various pooling strategies, attention mechanisms, and hyperparameters (e.g., dropout rate). Furthermore, we benchmark Gemma Encoder against established approaches on the GLUE benchmark and the MS MARCO ranking benchmark, demonstrating its effectiveness and versatility.
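To make the adaptation idea concrete, below is a minimal PyTorch sketch of the general recipe the abstract describes: take token states from a decoder backbone run with bidirectional (non-causal) attention, pool them into a single vector, and apply a small task head with dropout. The class name, parameters, and pooling options here are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class PooledEncoderHead(nn.Module):
    """Sketch of an encoder-style task head on top of decoder token states.

    Assumes `hidden_states` come from a decoder backbone whose causal mask
    has been replaced with bidirectional attention; names and defaults are
    illustrative, not taken from the paper.
    """

    def __init__(self, hidden_size: int, num_labels: int, dropout: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(
        self,
        hidden_states: torch.Tensor,   # (batch, seq_len, hidden_size)
        attention_mask: torch.Tensor,  # (batch, seq_len), 1 = real token
        pooling: str = "mean",
    ) -> torch.Tensor:
        if pooling == "mean":
            # Average over non-padding tokens.
            mask = attention_mask.unsqueeze(-1).float()
            pooled = (hidden_states * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        elif pooling == "last":
            # Last non-padding token, a natural choice for decoder backbones.
            last_idx = attention_mask.sum(1).long() - 1
            pooled = hidden_states[torch.arange(hidden_states.size(0)), last_idx]
        elif pooling == "first":
            # First token, analogous to BERT-style [CLS] pooling.
            pooled = hidden_states[:, 0]
        else:
            raise ValueError(f"unknown pooling strategy: {pooling}")
        return self.classifier(self.dropout(pooled))


# Toy usage with random tensors standing in for backbone outputs.
states = torch.randn(2, 8, 16)
mask = torch.ones(2, 8, dtype=torch.long)
head = PooledEncoderHead(hidden_size=16, num_labels=3)
logits = head(states, mask, pooling="mean")  # shape: (2, 3)
```

The same head also covers regression (one output unit) and ranking (a scoring head over query-document pairs), which is why the choice of pooling strategy and dropout rate becomes the main tuning question when repurposing a decoder this way.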
Similar Papers
Encoder-Decoder Gemma: Improving the Quality-Efficiency Trade-Off via Adaptation
Computation and Language
Makes smart computer programs work better and faster.
T5Gemma 2: Seeing, Reading, and Understanding Longer
Computation and Language
Helps computers understand pictures and many languages.
Beyond Decoder-only: Large Language Models Can be Good Encoders for Machine Translation
Computation and Language
Makes computer translation faster and uses less memory.