Score: 3

Sketch-to-Layout: Sketch-Guided Multimodal Layout Generation

Published: October 31, 2025 | arXiv ID: 2510.27632v1

By: Riccardo Brioschi, Aleksandr Alekseev, Emanuele Nevali, and more

BigTech Affiliations: Google

Potential Business Impact:

Draw a picture to design a page layout.

Business Areas:
Visual Search, Internet Services

Graphic layout generation is a growing research area focused on generating aesthetically pleasing layouts, ranging from poster designs to documents. While recent research has explored ways to incorporate user constraints to guide layout generation, these constraints often require complex specifications that reduce usability. We introduce an approach that exploits user-provided sketches as intuitive constraints, and we empirically demonstrate the effectiveness of this new guidance method, establishing the currently under-explored sketch-to-layout problem as a promising research direction. To tackle the sketch-to-layout problem, we propose a multimodal transformer-based solution that takes the sketch and the content assets as inputs and produces high-quality layouts. Since collecting sketch training data from human annotators to train our model is very costly, we introduce a novel and efficient method to synthetically generate training sketches at scale. We train and evaluate our model on three publicly available datasets: PubLayNet, DocLayNet and SlidesVQA, demonstrating that it outperforms state-of-the-art constraint-based methods while offering a more intuitive design experience. To facilitate future sketch-to-layout research, we release O(200k) synthetically generated sketches for the public datasets above. The datasets are available at https://github.com/google-deepmind/sketch_to_layout.
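
The abstract's key scalability idea is generating training sketches synthetically from existing layout annotations rather than collecting them from human annotators. Below is a minimal, hypothetical Python illustration of that idea: rendering a ground-truth layout's bounding boxes as jittered, freehand-looking rectangles. The box format (normalized [x0, y0, x1, y1]), the jitter model, and the function name `synthesize_sketch` are assumptions made for illustration, not the paper's actual pipeline.

```python
# Hypothetical sketch-synthesis step: draw each ground-truth layout box as a
# rough rectangle with per-corner jitter to imitate a freehand user sketch.
# Box format and jitter parameters are assumptions, not the paper's method.
import random
from PIL import Image, ImageDraw

def synthesize_sketch(boxes, size=(512, 512), jitter=0.01, seed=None):
    """Render normalized [x0, y0, x1, y1] boxes as a pen-stroke-style sketch."""
    rng = random.Random(seed)
    w, h = size
    img = Image.new("L", size, color=255)  # white grayscale canvas
    draw = ImageDraw.Draw(img)
    for x0, y0, x1, y1 in boxes:
        # Perturb each corner independently so edges look hand-drawn.
        corners = [(x0, y0), (x1, y0), (x1, y1), (x0, y1), (x0, y0)]
        noisy = [
            (w * (x + rng.uniform(-jitter, jitter)),
             h * (y + rng.uniform(-jitter, jitter)))
            for x, y in corners
        ]
        draw.line(noisy, fill=0, width=3)
    return img

# Example: a title block above a body block, as on a simple document page.
sketch = synthesize_sketch([(0.1, 0.05, 0.9, 0.2), (0.1, 0.25, 0.9, 0.9)], seed=0)
sketch.save("synthetic_sketch.png")
```

Pairing each generated sketch with its source layout yields (sketch, layout) supervision pairs at dataset scale, which is what makes training the multimodal model feasible without costly human annotation.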

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/google-deepmind/sketch_to_layout

Page Count
28 pages

Category
Computer Science:
CV and Pattern Recognition