Learning to Reason: Training LLMs with GPT-OSS or DeepSeek R1 Reasoning Traces

Published: November 24, 2025 | arXiv ID: 2511.19333v1

By: Shaltiel Shmidman, Asher Fredman, Oleg Sudakov, and more

BigTech Affiliations: NVIDIA

Potential Business Impact:

Teaches smaller language models to reason like larger ones.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Test-time scaling, which leverages additional computation during inference to improve model accuracy, has enabled a new class of Large Language Models (LLMs) that can reason through complex problems by understanding the goal, turning that goal into a plan, working through intermediate steps, and checking their own work before answering. Frontier reasoning models, such as DeepSeek-R1 and OpenAI's gpt-oss, follow this procedure when solving complex problems, generating intermediate reasoning traces before giving the final answer. These models are increasingly used to generate reasoning traces that serve as high-quality supervised data for post-training small and medium-sized language models, teaching them reasoning capabilities without expensive human curation. In this work, we compare the performance of medium-sized LLMs on math problems after post-training on two kinds of reasoning traces, evaluating the impact of traces generated by DeepSeek-R1 and gpt-oss in terms of accuracy and inference efficiency.
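The core recipe described in the abstract, using a reasoning model's intermediate traces as supervised targets for a smaller student model, can be illustrated with a minimal data-preparation sketch. The field names, file path, and prompt/target template below are hypothetical assumptions for illustration; the paper does not specify its exact data format, and the <think> delimiter mirrors DeepSeek-R1's convention while gpt-oss uses a different reasoning channel.

```python
import json

# Hypothetical teacher outputs: each record holds a math problem, the
# teacher model's intermediate reasoning trace, and its final answer.
teacher_records = [
    {
        "problem": "If 3x + 5 = 20, what is x?",
        "reasoning": "Subtract 5 from both sides: 3x = 15. Divide by 3: x = 5.",
        "answer": "5",
    },
]

def to_sft_example(record):
    """Format one teacher record as a supervised fine-tuning example.

    The student model is trained to reproduce the reasoning trace followed
    by the final answer, so it learns to emit intermediate steps before
    answering (the behavior test-time scaling relies on).
    """
    prompt = f"Solve the following problem step by step.\n\n{record['problem']}"
    target = f"<think>\n{record['reasoning']}\n</think>\n\nAnswer: {record['answer']}"
    return {"prompt": prompt, "completion": target}

# Write the distillation dataset as JSON Lines, a common SFT input format.
with open("reasoning_sft.jsonl", "w") as f:
    for rec in teacher_records:
        f.write(json.dumps(to_sft_example(rec)) + "\n")
```

Because the student reproduces the full trace at inference time, the length and verbosity of the teacher's traces directly affect the inference efficiency the paper compares alongside accuracy.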

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
6 pages

Category
Computer Science:
Computation and Language