CoMa: Contextual Massing Generation with Vision-Language Models

Published: January 13, 2026 | arXiv ID: 2601.08464v1

By: Evgenii Maslov, Valentin Khrulkov, Anastasia Volkova, and more

Potential Business Impact:

Automates building massing design with vision-language models, reducing the manual effort and time required in the conceptual design phase for architects and urban planners.

Business Areas:
Image Recognition, Data and Analytics, Software

The conceptual design phase in architecture and urban planning, particularly building massing, is complex and heavily reliant on designer intuition and manual effort. To address this, we propose an automated framework for generating building massing based on functional requirements and site context. A primary obstacle to such data-driven methods has been the lack of suitable datasets. Consequently, we introduce the CoMa-20K dataset, a comprehensive collection that includes detailed massing geometries, associated economic and programmatic data, and visual representations of the development site within its existing urban context. We benchmark this dataset by formulating massing generation as a conditional task for Vision-Language Models (VLMs), evaluating both fine-tuned and large zero-shot models. Our experiments reveal the inherent complexity of the task while demonstrating the potential of VLMs to produce context-sensitive massing options. The dataset and analysis establish a foundational benchmark and highlight significant opportunities for future research in data-driven architectural design.
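To make the "conditional task for VLMs" framing concrete, below is a minimal sketch (not the authors' code) of how a CoMa-20K-style sample could be turned into a prompt for a vision-language model, with the site image supplied as the visual input and the massing returned as simple box primitives. All field names, constraints, and the JSON output format are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical sketch of conditional massing generation with a VLM.
# Sample fields and the box-based output format are assumptions for illustration.
from dataclasses import dataclass
import json


@dataclass
class MassingSample:
    site_image_path: str   # rendering of the development site in its urban context
    target_gfa_m2: float   # programmatic requirement: gross floor area
    max_height_m: float    # regulatory / economic constraint
    land_use: str          # e.g. "residential", "office", "mixed"


def build_prompt(sample: MassingSample) -> str:
    """Turn structured requirements into a text prompt; the site image is passed
    to the VLM separately as the visual input."""
    return (
        "Given the attached aerial view of the development site, propose a building "
        "massing as a JSON list of boxes with keys x, y, width, depth, height (meters). "
        f"Constraints: land use = {sample.land_use}, "
        f"target GFA = {sample.target_gfa_m2:.0f} m^2, "
        f"maximum height = {sample.max_height_m:.0f} m."
    )


def parse_massing(vlm_output: str) -> list[dict]:
    """Parse the model's JSON answer into box primitives, dropping degenerate boxes."""
    boxes = json.loads(vlm_output)
    return [b for b in boxes if b.get("height", 0) > 0]


if __name__ == "__main__":
    sample = MassingSample(
        site_image_path="site_001.png",
        target_gfa_m2=12000,
        max_height_m=45,
        land_use="mixed",
    )
    print(build_prompt(sample))
```

In this framing, fine-tuning amounts to supervising the model on (site image, requirements) → massing-geometry pairs from the dataset, while zero-shot evaluation sends the same prompt and image to a large off-the-shelf VLM and parses its answer.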

Page Count
9 pages

Category
Computer Science:
Computer Vision and Pattern Recognition