Content-Aware Ad Banner Layout Generation with Two-Stage Chain-of-Thought in Vision Language Models
By: Kei Yoshitake, Kento Hosono, Ken Kobayashi, and more
Potential Business Impact:
Creates better ads by understanding pictures.
In this paper, we propose a method for generating layouts for image-based advertisements by leveraging a Vision-Language Model (VLM). Conventional advertisement layout techniques have relied predominantly on saliency maps to detect salient regions in a background image, but such approaches often fail to account for the image's detailed composition and semantic content. To overcome this limitation, our method uses a VLM to recognize the products and other elements depicted in the background and to inform the placement of text and logos. The proposed layout-generation pipeline consists of two steps. In the first step, the VLM analyzes the image to identify object types and their spatial relationships, then produces a text-based "placement plan" based on this analysis. In the second step, the plan is rendered into the final layout by generating HTML code. We validated our approach through quantitative and qualitative comparisons against existing methods. The results demonstrate that, by explicitly considering the background image's content, our method produces noticeably higher-quality advertisement layouts.
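To make the two-step pipeline concrete, here is a minimal Python sketch of how such a system could be wired up. It assumes a generic chat-style VLM client; the call_vlm helper, both prompts, and the overall structure are hypothetical illustrations of the paper's described flow (image analysis producing a placement plan, then plan-to-HTML rendering), not the authors' actual implementation.

import base64

def call_vlm(image_b64: str, prompt: str) -> str:
    """Hypothetical wrapper around a chat-style VLM API; returns the
    model's text response for an image + prompt pair. Plug in any
    VLM client that accepts an image and a text prompt."""
    raise NotImplementedError("replace with a real VLM client call")

# Step 1 prompt: ask the VLM to analyze the image and emit a
# text-based placement plan (wording is an illustrative assumption).
STEP1_PROMPT = (
    "Analyze this advertisement background image. List the objects it "
    "contains, their types, and their spatial relationships, then write "
    "a text-based placement plan stating where the headline text and "
    "logo should go so they do not occlude the product."
)

# Step 2 prompt: ask the VLM to render the plan as HTML layout code.
STEP2_PROMPT_TEMPLATE = (
    "Render the following placement plan as a complete HTML document "
    "that overlays the text and logo on the background image at the "
    "planned positions, using absolutely positioned <div> elements:\n\n"
    "{plan}"
)

def generate_layout(image_path: str) -> str:
    # Encode the background image once; both steps see the same image.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    # Step 1: the VLM reasons about the image content and produces
    # a natural-language placement plan.
    plan = call_vlm(image_b64, STEP1_PROMPT)

    # Step 2: the plan is rendered into the final layout as HTML code.
    return call_vlm(image_b64, STEP2_PROMPT_TEMPLATE.format(plan=plan))

Separating the two calls mirrors the chain-of-thought idea in the title: the intermediate placement plan makes the VLM's reasoning about image content explicit before any layout code is generated.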
Similar Papers
Vision-Enhanced Large Language Models for High-Resolution Image Synthesis and Multimodal Data Interpretation
CV and Pattern Recognition
Makes computers create clearer pictures from words.
STER-VLM: Spatio-Temporal With Enhanced Reference Vision-Language Models
CV and Pattern Recognition
Helps self-driving cars understand traffic better.
Vision Large Language Models Are Good Noise Handlers in Engagement Analysis
CV and Pattern Recognition
Helps computers understand how interested people are.