OpenThaiGPT 1.6 and R1: Thai-Centric Open Source and Reasoning Large Language Models
By: Sumeth Yuenyong, Thodsaporn Chay-intr, Kobkrit Viriyayudhakorn
Potential Business Impact:
Helps computers understand and speak Thai better.
We present OpenThaiGPT 1.6 and R1 (OTG-1.6 and OTG-R1), Thai-centric Large Language Models (LLMs) developed through distinct methodologies to enhance generalization and reasoning capabilities. OTG-1.6 employs Task Arithmetic model merging for broad generalization, while OTG-R1 integrates multi-stage training with the Less-Is-More Reasoning Hypothesis (LIMO) for advanced reasoning. Benchmark evaluations demonstrate superior performance across Thai language tasks, with both models achieving results competitive with larger-scale open-source Thai LLMs. This paper details the proposed models, training processes, benchmarks, and results, highlighting improvements over previous models and establishing new performance standards for Thai-centric LLMs.
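For context on the merging method named in the abstract: Task Arithmetic combines fine-tuned checkpoints by computing each model's "task vector" (its parameter delta from a shared base) and adding a weighted sum of those vectors back onto the base. The sketch below illustrates the general technique only; the function name, toy state dicts, and equal weights are illustrative assumptions, not the actual OTG-1.6 merging recipe or weights.

import torch

def task_arithmetic_merge(base, finetuned_models, weights):
    """Merge fine-tuned checkpoints via Task Arithmetic.

    Each task vector is (fine-tuned params - base params); the merged
    model is the base plus a weighted sum of the task vectors.
    All arguments hold state dicts mapping parameter names to tensors.
    """
    merged = {}
    for name, base_param in base.items():
        delta = sum(
            w * (ft[name] - base_param)
            for w, ft in zip(weights, finetuned_models)
        )
        merged[name] = base_param + delta
    return merged

# Toy usage with two hypothetical task-specialized checkpoints.
base = {"w": torch.zeros(2)}
chat_model = {"w": torch.tensor([1.0, 0.0])}   # e.g., instruction-tuned
translate_model = {"w": torch.tensor([0.0, 1.0])}  # e.g., translation-tuned
merged = task_arithmetic_merge(base, [chat_model, translate_model], weights=[0.5, 0.5])
print(merged["w"])  # tensor([0.5000, 0.5000])

In practice the per-checkpoint weights are tuned so that each task's contribution is balanced against the others, which is what gives the merged model its broad generalization.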
Similar Papers
OpenJAI-v1.0: An Open Thai Large Language Model
Computation and Language
Helps computers understand Thai and English better.
Learning to Reason: Training LLMs with GPT-OSS or DeepSeek R1 Reasoning Traces
Computation and Language
Teaches smaller computers to think like big ones.
ClinicalGPT-R1: Pushing reasoning capability of generalist disease diagnosis with large language model
Computation and Language
Helps doctors diagnose illnesses better using AI.