Rethinking generative image pretraining: How far are we from scaling up next-pixel prediction?

Published: November 11, 2025 | arXiv ID: 2511.08704v1

By: Xinchen Yan, Chen Liang, Lijun Yu, and more

Potential Business Impact:

Computers will soon draw pictures by guessing each dot.

Business Areas:
Image Recognition, Data and Analytics, Software

This paper investigates the scaling properties of autoregressive next-pixel prediction, a simple, end-to-end, yet under-explored framework for unified vision models. Starting with images at a resolution of 32x32, we train a family of Transformers using IsoFlops profiles across compute budgets up to 7e19 FLOPs and evaluate three distinct target metrics: the next-pixel prediction objective, ImageNet classification accuracy, and generation quality measured by Fréchet Distance. First, the optimal scaling strategy is critically task-dependent. Even at a fixed 32x32 resolution, the optimal scaling properties for image classification and image generation diverge: the generation-optimal setup requires the data size to grow three to five times faster than the classification-optimal setup. Second, as image resolution increases, the optimal scaling strategy indicates that the model size must grow much faster than the data size. Surprisingly, by projecting our findings forward, we find that the primary bottleneck is compute rather than the amount of training data. If compute continues to grow four to five times annually, we forecast that pixel-by-pixel modeling of images will become feasible within the next five years.
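
The framework under study is plain next-pixel prediction: each image is flattened into a sequence of discrete 8-bit pixel values and a decoder-only Transformer is trained with the standard next-token cross-entropy objective. Below is a minimal sketch of that objective, assuming PyTorch; the toy model, its hyperparameters, and the random data are illustrative stand-ins, not the paper's actual architecture or training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NextPixelTransformer(nn.Module):
    """Toy decoder-only Transformer over flattened 8-bit pixel tokens."""
    def __init__(self, vocab=256, d_model=128, n_head=4, n_layer=2, seq_len=32 * 32):
        super().__init__()
        self.tok = nn.Embedding(vocab, d_model)    # one token per pixel intensity
        self.pos = nn.Embedding(seq_len, d_model)  # learned position embeddings
        layer = nn.TransformerEncoderLayer(d_model, n_head, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layer)
        self.head = nn.Linear(d_model, vocab)      # logits over 256 pixel values

    def forward(self, x):  # x: (B, T) integer pixel values in [0, 255]
        T = x.size(1)
        h = self.tok(x) + self.pos(torch.arange(T, device=x.device))
        # Causal mask: position t may only attend to pixels at positions <= t.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1)
        return self.head(self.blocks(h, mask=mask))  # (B, T, 256)

# Next-pixel objective with teacher forcing: predict pixel t+1 from pixels <= t.
model = NextPixelTransformer()
imgs = torch.randint(0, 256, (4, 32 * 32))  # a batch of flattened 32x32 grayscale images
logits = model(imgs[:, :-1])                # condition on all but the final pixel
loss = F.cross_entropy(logits.reshape(-1, 256), imgs[:, 1:].reshape(-1))
loss.backward()
```

The five-year forecast follows from simple compounding of the compute budget. A back-of-the-envelope check using only the figures quoted in the abstract (the 7e19-FLOPs ceiling and 4-5x annual compute growth); the feasibility projection itself is the paper's, the arithmetic below is merely illustrative:

```python
budget = 7e19  # largest compute budget explored in the paper, in FLOPs
for growth in (4, 5):  # annual compute growth factors cited in the abstract
    print(f"{growth}x/year for 5 years -> {budget * growth**5:.1e} FLOPs "
          f"({growth**5}x today's budget)")
# 4x/year -> ~7.2e22 FLOPs (1024x); 5x/year -> ~2.2e23 FLOPs (3125x)
```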

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition