CoMa: Contextual Massing Generation with Vision-Language Models
By: Evgenii Maslov, Valentin Khrulkov, Anastasia Volkova, and more
Potential Business Impact:
Speeds up early-stage building design by automatically generating massing options from site context and project requirements.
The conceptual design phase in architecture and urban planning, particularly building massing, is complex and heavily reliant on designer intuition and manual effort. To address this, we propose an automated framework for generating building massing based on functional requirements and site context. A primary obstacle to such data-driven methods has been the lack of suitable datasets. Consequently, we introduce the CoMa-20K dataset, a comprehensive collection that includes detailed massing geometries, associated economic and programmatic data, and visual representations of the development site within its existing urban context. We benchmark this dataset by formulating massing generation as a conditional task for Vision-Language Models (VLMs), evaluating both fine-tuned and large zero-shot models. Our experiments reveal the inherent complexity of the task while demonstrating the potential of VLMs to produce context-sensitive massing options. The dataset and analysis establish a foundational benchmark and highlight significant opportunities for future research in data-driven architectural design.
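To make the "conditional task for VLMs" framing concrete, here is a minimal sketch of how massing generation might be posed: serialize the functional requirements into a prompt, pair it with the site-context image, and ask a VLM for structured geometry. The record fields, prompt wording, output schema, and the `vlm_generate` callable are all illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of massing generation as a conditional VLM task,
# in the spirit of the CoMa-20K benchmark. All names and schemas here
# are assumptions for illustration; the paper's setup may differ.
import json
from dataclasses import dataclass

@dataclass
class SiteRecord:
    site_image_path: str   # rendering of the site in its urban context
    target_gfa_m2: float   # required gross floor area (programmatic data)
    max_height_m: float    # zoning-style height constraint
    program_mix: dict      # e.g. {"residential": 0.7, "retail": 0.3}

def build_prompt(site: SiteRecord) -> str:
    """Serialize functional requirements into a conditioning prompt."""
    return (
        "Given the attached site image, propose a building massing as a "
        "JSON list of extruded boxes. Each box has footprint corners "
        "(x, y in meters, site-local coordinates) and a height in meters.\n"
        f"Target GFA: {site.target_gfa_m2} m2\n"
        f"Max height: {site.max_height_m} m\n"
        f"Program mix: {json.dumps(site.program_mix)}"
    )

def generate_massing(site: SiteRecord, vlm_generate) -> list:
    """vlm_generate is any callable(image_path, prompt) -> str; a
    fine-tuned or zero-shot VLM would be plugged in here."""
    raw = vlm_generate(site.site_image_path, build_prompt(site))
    return json.loads(raw)  # expected: [{"footprint": [...], "height": ...}]

if __name__ == "__main__":
    # Stub VLM so the sketch runs end to end without model weights.
    def stub_vlm(image_path, prompt):
        return json.dumps([{"footprint": [[0, 0], [30, 0], [30, 20], [0, 20]],
                            "height": 24.0}])

    site = SiteRecord("site_context.png", 12000.0, 30.0,
                      {"residential": 0.7, "retail": 0.3})
    print(generate_massing(site, stub_vlm))
```

Emitting geometry as structured JSON rather than free-form text is one plausible design choice: it keeps the output machine-checkable against constraints such as the gross-floor-area target and height limit.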
Similar Papers
Let Language Constrain Geometry: Vision-Language Models as Semantic and Spatial Critics for 3D Generation
CV and Pattern Recognition
Uses vision-language models as critics to keep generated 3D shapes semantically and spatially consistent with text.
Towards General Urban Monitoring with Vision-Language Models: A Review, Evaluation, and a Research Agenda
CV and Pattern Recognition
Surveys how vision-language models can monitor cities and detect urban issues.
Coding the Visual World: From Image to Simulation Using Vision Language Models
CV and Pattern Recognition
Turns images of real-world scenes into runnable simulations using vision-language models.