Encoder-Decoder or Decoder-Only? Revisiting Encoder-Decoder Large Language Model
By: Biao Zhang, Yong Cheng, Siamak Shakeri, and more
Potential Business Impact:
Shows an older AI design can answer just as well while running faster and cheaper.
Recent large language model (LLM) research has undergone an architectural shift from encoder-decoder modeling to the now-dominant decoder-only modeling. This rapid transition, however, comes without a rigorous comparative analysis, especially from the scaling perspective, raising concerns that the potential of encoder-decoder models may have been overlooked. To fill this gap, we revisit the encoder-decoder LLM (RedLLM), enhancing it with recent recipes from decoder-only LLMs (DecLLM). We conduct a comprehensive comparison between RedLLM, pretrained with prefix language modeling (LM), and DecLLM, pretrained with causal LM, at different model scales, ranging from ~150M to ~8B parameters. Using RedPajama V1 (1.6T tokens) for pretraining and FLAN for instruction tuning, our experiments show that RedLLM produces compelling scaling properties and surprisingly strong performance. While DecLLM is overall more compute-optimal during pretraining, RedLLM demonstrates comparable scaling and context length extrapolation capabilities. After instruction tuning, RedLLM achieves comparable and even better results on various downstream tasks while enjoying substantially better inference efficiency. We hope our findings can inspire more efforts on re-examining RedLLM, unlocking its potential for developing powerful and efficient LLMs.
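The core distinction the abstract draws is between the causal LM objective (DecLLM) and the prefix LM objective (RedLLM-style), which differ in how attention is masked during pretraining. The sketch below is a minimal illustration of that difference, not code from the paper; the function names (`causal_mask`, `prefix_lm_mask`) and the use of NumPy are assumptions made purely for exposition.

```python
import numpy as np


def causal_mask(seq_len: int) -> np.ndarray:
    """Causal LM mask: each position attends only to itself and earlier positions."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))


def prefix_lm_mask(seq_len: int, prefix_len: int) -> np.ndarray:
    """Prefix LM mask: the first `prefix_len` tokens (the input/prefix segment)
    attend to each other bidirectionally, while the remaining target tokens
    attend causally to everything before them."""
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))
    mask[:prefix_len, :prefix_len] = True  # bidirectional attention over the prefix
    return mask


if __name__ == "__main__":
    # Row i marks which positions token i may attend to (1 = visible).
    print(causal_mask(5).astype(int))
    print(prefix_lm_mask(5, prefix_len=2).astype(int))
```

In practice the bidirectional prefix plays the role of an encoder over the input, while the causal remainder plays the role of a decoder over the target, which is why the prefix LM objective pairs naturally with encoder-decoder architectures.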
Similar Papers
Beyond Decoder-only: Large Language Models Can be Good Encoders for Machine Translation
Computation and Language
Makes computer translation faster and uses less memory.
Encoder-Decoder Gemma: Improving the Quality-Efficiency Trade-Off via Adaptation
Computation and Language
Makes smart computer programs work better and faster.
Utilizing Multilingual Encoders to Improve Large Language Models for Low-Resource Languages
Computation and Language
Helps computers understand many languages better.