Score: 3

Demystifying Transition Matching: When and Why It Can Beat Flow Matching

Published: October 20, 2025 | arXiv ID: 2510.17991v1

By: Jaihoon Kim, Rajarshi Saha, Minhyuk Sung, and more

BigTech Affiliations: Amazon

Potential Business Impact:

Enables image and video generation models to reach higher sample quality with fewer sampling steps, potentially lowering inference cost and latency.

Business Areas:
A/B Testing, Data and Analytics

Flow Matching (FM) underpins many state-of-the-art generative models, yet recent results indicate that Transition Matching (TM) can achieve higher quality with fewer sampling steps. This work answers the question of when and why TM outperforms FM. First, when the target is a unimodal Gaussian distribution, we prove that TM attains strictly lower KL divergence than FM for a finite number of steps. The improvement arises from stochastic difference latent updates in TM, which preserve target covariance that deterministic FM underestimates. We then characterize convergence rates, showing that TM achieves faster convergence than FM under a fixed compute budget, establishing its advantage in the unimodal Gaussian setting. Second, we extend the analysis to Gaussian mixtures and identify local-unimodality regimes in which the sampling dynamics approximate the unimodal case, where TM can outperform FM. The approximation error decreases as the minimal distance between component means increases, highlighting that TM is favored when the modes are well separated. However, when the target variance approaches zero, each TM update converges to the FM update, and the performance advantage of TM diminishes. In summary, we show that TM outperforms FM when the target distribution has well-separated modes and non-negligible variances. We validate our theoretical results with controlled experiments on Gaussian distributions, and extend the comparison to real-world applications in image and video generation.
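As a quick illustration of the abstract's central claim, the toy sketch below (not code from the paper; the closed-form posteriors, the specific update rules, and all parameter values are assumptions chosen for a 1-D Gaussian target) contrasts a deterministic FM-style Euler sampler, which advances samples by the posterior-mean velocity of the linear interpolation path, with a TM-style sampler that instead draws the per-step difference from its full conditional distribution. With only a few steps, the deterministic sampler visibly underestimates the target variance, while the stochastic variant preserves it.

```python
# Toy 1-D comparison (a sketch, not the paper's algorithms): deterministic
# FM-style Euler updates vs. stochastic TM-style difference updates when the
# target is a unimodal Gaussian. All formulas follow from the linear path
# x_t = (1 - t) x0 + t x1 with x0 ~ N(0, 1) and x1 ~ N(mu, s2) independent.
import numpy as np

rng = np.random.default_rng(0)
mu, s2 = 2.0, 0.5        # assumed target mean and variance
n, steps = 200_000, 4    # number of samples and sampling steps

def posterior_of_difference(x, t):
    """Mean and variance of (x1 - x0) given x_t = x under the Gaussian path."""
    a, b = 1.0 - t, t
    var_xt = a**2 + b**2 * s2          # Var(x_t)
    cov = b * s2 - a                   # Cov(x1 - x0, x_t)
    mean = mu + cov / var_xt * (x - b * mu)
    var = (1.0 + s2) - cov**2 / var_xt
    return mean, var

x_fm = rng.standard_normal(n)          # deterministic FM-style sampler
x_tm = rng.standard_normal(n)          # stochastic TM-style sampler
dt = 1.0 / steps
for k in range(steps):
    t = k * dt
    m_fm, _ = posterior_of_difference(x_fm, t)
    x_fm = x_fm + dt * m_fm            # Euler step with posterior-mean velocity
    m_tm, v_tm = posterior_of_difference(x_tm, t)
    x_tm = x_tm + dt * rng.normal(m_tm, np.sqrt(v_tm))  # sample the difference

print(f"target variance    : {s2:.3f}")
print(f"FM-style variance  : {x_fm.var():.3f}  (underestimated with few steps)")
print(f"TM-style variance  : {x_tm.var():.3f}")
```

Setting `steps = 1` makes the contrast extreme: the deterministic sampler collapses every sample to the target mean, whereas the stochastic one already matches the target variance, consistent with the abstract's statement that TM's stochastic updates preserve covariance that deterministic FM underestimates at a finite number of steps.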

Country of Origin
🇺🇸 🇰🇷 Korea, Republic of; United States

Page Count
26 pages

Category
Computer Science:
Machine Learning (CS)