Training LLMs on HPC Systems: Best Practices from the OpenGPT-X Project
By: Carolin Penke, Chelsea Maria John, Jan Ebert, and more
Potential Business Impact:
Supports the development of open, multilingual LLMs optimized for European languages.
The training of large language models (LLMs) requires substantial computational resources, complex software stacks, and carefully designed workflows to achieve scalability and efficiency. This report presents best practices and insights gained from the OpenGPT-X project, a German initiative focused on developing open, multilingual LLMs optimized for European languages. We detail the use of high-performance computing (HPC) systems, primarily JUWELS Booster at JSC, for training Teuken-7B, a 7-billion-parameter transformer model. The report covers system architecture, training infrastructure, software choices, profiling and benchmarking tools, as well as engineering and operational challenges.
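The report itself documents the concrete training stack used on JUWELS Booster; none of those scripts are reproduced here. As a purely illustrative sketch of one building block of such an HPC training workflow, the snippet below shows how a multi-node PyTorch job might derive its torch.distributed rendezvous from SLURM environment variables. Every variable name, default, and fallback in it is an assumption chosen for illustration, not taken from the OpenGPT-X configuration.

# Illustrative sketch (not from the report): bootstrapping PyTorch distributed
# training from SLURM environment variables on a multi-node HPC system.
# All defaults below are placeholders, not the OpenGPT-X settings.
import os

import torch
import torch.distributed as dist


def init_distributed_from_slurm() -> int:
    """Map SLURM's per-process environment to torch.distributed's env:// init."""
    rank = int(os.environ.get("SLURM_PROCID", "0"))        # global rank of this process
    world_size = int(os.environ.get("SLURM_NTASKS", "1"))  # total number of processes
    local_rank = int(os.environ.get("SLURM_LOCALID", "0"))  # rank within this node

    # torch.distributed's env:// rendezvous expects these variables; in a real job
    # the batch script would export MASTER_ADDR/MASTER_PORT before launch.
    os.environ.setdefault("RANK", str(rank))
    os.environ.setdefault("WORLD_SIZE", str(world_size))
    os.environ.setdefault("MASTER_ADDR", "localhost")  # placeholder: first allocated node in practice
    os.environ.setdefault("MASTER_PORT", "29500")

    backend = "nccl" if torch.cuda.is_available() else "gloo"
    dist.init_process_group(backend=backend, init_method="env://")

    if torch.cuda.is_available():
        torch.cuda.set_device(local_rank)  # bind one GPU per process
    return local_rank


if __name__ == "__main__":
    local_rank = init_distributed_from_slurm()
    print(f"rank {dist.get_rank()} / {dist.get_world_size()} ready on local rank {local_rank}")
    dist.destroy_process_group()

In practice, the surrounding batch script would set MASTER_ADDR to the hostname of the first allocated node and launch one such process per GPU; the model- and data-parallel layout on top of this rendezvous is what the report's training infrastructure sections discuss.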