T-pro 2.0: An Efficient Russian Hybrid-Reasoning Model and Playground
By: Dmitrii Stoianov, Danil Taranets, Olga Tsymboi, and more
Potential Business Impact:
Helps computers understand and answer Russian questions faster.
We introduce T-pro 2.0, an open-weight Russian LLM for hybrid reasoning and efficient inference. The model supports direct answering and reasoning-trace generation, using a Cyrillic-dense tokenizer and an adapted EAGLE speculative-decoding pipeline to reduce latency. To enable reproducible and extensible research, we release the model weights, the T-Wix 500k instruction corpus, the T-Math reasoning benchmark, and the EAGLE weights on Hugging Face. These resources allow users to study Russian-language reasoning and to extend or adapt both the model and the inference pipeline. A public web demo exposes reasoning and non-reasoning modes and illustrates the speedups achieved by our inference stack across domains. T-pro 2.0 thus serves as an accessible open system for building and evaluating efficient, practical Russian LLM applications.
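As a rough illustration of how the released weights might be used, the sketch below loads the model with Hugging Face transformers and toggles between reasoning-trace generation and direct answering via the chat template. The repo id `t-tech/T-pro-it-2.0` and the `enable_thinking` template flag are assumptions based on common open hybrid-reasoning model interfaces, not confirmed details from this abstract; consult the model card for the actual usage.

```python
# Minimal sketch (assumptions noted above): loading T-pro 2.0 and switching
# between reasoning and non-reasoning modes through the chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "t-tech/T-pro-it-2.0"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Сколько будет 17 * 23?"}]

# Hybrid reasoning: the template flag below (assumed name `enable_thinking`)
# would switch between reasoning-trace generation and direct answering.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # set False for a direct answer
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(
    output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
))
```

The EAGLE speculative-decoding speedups described in the abstract would come from the separate released EAGLE weights and an inference stack such as the one behind the public demo, not from this plain generation loop.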
Similar Papers
LexPro-1.0 Technical Report
Computation and Language
Helps lawyers understand complex Chinese laws.
Spark-Prover-X1: Formal Theorem Proving Through Diverse Data Training
Computation and Language
Helps computers prove math problems faster.