A Review of DeepSeek Models' Key Innovative Techniques

Published: March 14, 2025 | arXiv ID: 2503.11486v1

By: Chengen Wang, Murat Kantarcioglu

Potential Business Impact:

Cuts the cost of training and running large language models while keeping their performance competitive with leading closed-source systems.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

DeepSeek-V3 and DeepSeek-R1 are leading open-source Large Language Models (LLMs) for general-purpose tasks and reasoning, achieving performance comparable to state-of-the-art closed-source models from companies like OpenAI and Anthropic, while requiring only a fraction of their training costs. Understanding the key innovative techniques behind DeepSeek's success is crucial for advancing LLM research. In this paper, we review the core techniques driving the remarkable effectiveness and efficiency of these models: refinements to the transformer architecture, including Multi-Head Latent Attention and Mixture of Experts; Multi-Token Prediction; the co-design of algorithms, frameworks, and hardware; the Group Relative Policy Optimization algorithm; and post-training with pure reinforcement learning as well as iterative training that alternates between supervised fine-tuning and reinforcement learning. Additionally, we identify several open questions and highlight potential research opportunities in this rapidly advancing field.
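Of the techniques the abstract lists, Group Relative Policy Optimization (GRPO) is the easiest to convey compactly: instead of PPO's learned critic, it samples several responses per prompt and scores each one's advantage as its reward normalized by the group's mean and standard deviation. A minimal sketch of that advantage computation follows; the helper name `grpo_advantages` and the stabilizing epsilon are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantage estimation, GRPO-style.

    Each response's reward is normalized by the mean and standard
    deviation of rewards within its own group (all responses sampled
    for the same prompt), removing the need for a learned value model.
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    # Epsilon guards against division by zero when all rewards in a
    # group are identical (an assumption of this sketch, not the paper).
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Example: rewards for four responses sampled from the same prompt.
# Above-average responses get positive advantages, below-average negative.
print(grpo_advantages([1.0, 0.0, 0.5, 0.25]))
```

Because the baseline comes from the sample group itself, this estimator adds no extra model parameters, which is part of how GRPO reduces the cost of reinforcement-learning post-training.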

Country of Origin
🇺🇸 United States

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)