Transition Models: Rethinking the Generative Learning Objective
By: Zidong Wang, Yiyuan Zhang, Xiaoyu Yue, and more
Potential Business Impact:
Makes AI create better pictures faster.
A fundamental dilemma persists in generative modeling: iterative diffusion models achieve outstanding fidelity but at significant computational cost, while efficient few-step alternatives are constrained by a hard quality ceiling. This conflict between generation steps and output quality arises from restrictive training objectives that focus exclusively on either infinitesimal dynamics (probability-flow ODEs, PF-ODEs) or direct endpoint prediction. We address this challenge by introducing an exact, continuous-time dynamics equation that analytically defines state transitions across any finite time interval. This leads to a novel generative paradigm, Transition Models (TiM), which adapt to arbitrary-step transitions, seamlessly traversing the generative trajectory from a single leap to fine-grained refinement with many steps. Despite having only 865M parameters, TiM achieves state-of-the-art performance, surpassing leading models such as SD3.5 (8B parameters) and FLUX.1 (12B parameters) across all evaluated step counts. Importantly, unlike previous few-step generators, TiM shows monotonic quality improvement as the sampling budget increases. When combined with our native-resolution strategy, TiM delivers exceptional fidelity at resolutions up to 4096×4096.
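To make the arbitrary-step idea concrete, the sketch below shows how a learned transition function could be used at sampling time. It is a minimal illustration, not the paper's implementation: `transition_fn`, the time convention (t = 1 for noise, t = 0 for data), and the uniform time grid are all assumptions introduced here for clarity.

```python
import torch


@torch.no_grad()
def tim_sample(transition_fn, latent_shape, steps, device="cuda"):
    """Sample with an arbitrary number of steps using a learned transition map.

    `transition_fn(x_t, t, s)` is a hypothetical stand-in for a trained
    Transition Model that maps the state at time t directly to the state at an
    earlier time s, for any finite interval t - s.
    """
    # Start from pure noise at t = 1 (the prior end of the trajectory).
    x = torch.randn(latent_shape, device=device)

    # Any monotone time grid works: a single leap (steps=1) or fine-grained
    # refinement (large `steps`); quality is expected to improve with steps.
    times = torch.linspace(1.0, 0.0, steps + 1, device=device)

    # One network evaluation per transition, regardless of interval size.
    for t, s in zip(times[:-1], times[1:]):
        x = transition_fn(x, t, s)

    # x approximates a sample from the data distribution at t = 0.
    return x
```

The point of the sketch is that the same trained model serves both regimes: calling it with `steps=1` performs a single leap from noise to data, while a larger budget refines the trajectory step by step.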
Similar Papers
Exploring the Design Space of Transition Matching
Machine Learning (CS)
Creates better AI art faster and more efficiently.
Offline Reinforcement Learning with Generative Trajectory Policies
Machine Learning (CS)
Makes robots learn tasks faster and better.
From Navigation to Refinement: Revealing the Two-Stage Nature of Flow-based Diffusion Models through Oracle Velocity
Machine Learning (CS)
Teaches computers to create realistic pictures and videos.