Score: 1

OmniAlpha: A Sequence-to-Sequence Framework for Unified Multi-Task RGBA Generation

Published: November 25, 2025 | arXiv ID: 2511.20211v1

By: Hao Yu, Jiabo Zhan, Zile Wang, and more

Potential Business Impact:

Creates images with transparent parts, like cutouts.

Business Areas:
Image Recognition Data and Analytics, Software

Generative models have excelled at RGB synthesis, but real-world applications require RGBA manipulation. This has led to a fragmented landscape: specialized, single-task models handle alpha but lack versatility, while unified multi-task frameworks are confined to the RGB domain. To bridge this critical gap, we propose OmniAlpha, the first unified, multi-task generative framework for sequence-to-sequence RGBA image generation and editing. Its architecture features MSRoPE-BiL, a novel RoPE method with a bi-directionally extendable layer axis for its Diffusion Transformer (DiT) backbone, enabling the concurrent processing of multiple input and target RGBA layers. To power this framework, we introduce AlphaLayers, a new dataset of 1,000 high-quality multi-layer triplets built via a novel automated synthesis-and-filtering pipeline. We jointly train OmniAlpha on this dataset across a comprehensive suite of 21 diverse tasks, and extensive experiments demonstrate that our unified approach consistently outperforms strong, specialized baselines. Most notably, OmniAlpha achieves a dramatic 84.8% relative reduction in SAD for mask-free matting on AIM-500 and wins over 90% of human preferences in layer-conditioned completion. Our work proves that a unified, multi-task model can learn a superior shared representation for RGBA, paving the way for more powerful, layer-aware generative systems.
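The abstract does not spell out how MSRoPE-BiL is constructed, so the following is only a minimal sketch of the general idea it names: standard rotary position embeddings applied per axis, with an extra signed "layer" axis that can extend in both directions (here assumed to mean, e.g., negative indices for input/reference layers and non-negative indices for target layers). All dimension splits, index conventions, and names below are assumptions for illustration, not the paper's actual design.

```python
import torch

def rope_freqs(dim, theta=10000.0):
    # Standard RoPE inverse frequencies for a single positional axis.
    return 1.0 / (theta ** (torch.arange(0, dim, 2).float() / dim))

def axial_rope(pos, dim, theta=10000.0):
    # Cos/sin rotation tables for integer positions along one axis.
    freqs = torch.outer(pos.float(), rope_freqs(dim, theta))  # (N, dim/2)
    return freqs.cos(), freqs.sin()

def apply_rope(x, cos, sin):
    # Rotate channel pairs by the per-position angles (classic RoPE rotation).
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Hypothetical multi-axis positions for a stack of RGBA layers:
# height/width index the spatial token grid, while the layer axis is signed so
# it can grow in both directions (assumed: negative ids for conditioning
# layers, non-negative ids for target layers being generated).
H, W = 4, 4
layer_ids = torch.tensor([-2, -1, 0, 1])       # bi-directionally extendable axis
tokens_per_layer = H * W
head_dim = 48                                   # split evenly across axes (assumed)
d_h = d_w = d_l = head_dim // 3

hh, ww = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
q = torch.randn(len(layer_ids) * tokens_per_layer, head_dim)

cos_h, sin_h = axial_rope(hh.flatten().repeat(len(layer_ids)), d_h)
cos_w, sin_w = axial_rope(ww.flatten().repeat(len(layer_ids)), d_w)
cos_l, sin_l = axial_rope(layer_ids.repeat_interleave(tokens_per_layer), d_l)

# Each axis rotates its own slice of the head dimension, so attention scores
# depend on relative offsets in height, width, and layer index jointly.
q_rot = torch.cat([
    apply_rope(q[:, :d_h], cos_h, sin_h),
    apply_rope(q[:, d_h:d_h + d_w], cos_w, sin_w),
    apply_rope(q[:, d_h + d_w:], cos_l, sin_l),
], dim=-1)
print(q_rot.shape)  # torch.Size([64, 48])
```

Because RoPE encodes relative offsets, a signed layer index like this lets new input or target layers be appended on either side of the stack without re-indexing existing ones, which is the property the "bi-directionally extendable" description suggests.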

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
54 pages

Category
Computer Science:
CV and Pattern Recognition