mrCAD: Multimodal Refinement of Computer-aided Designs
By: William P. McCarthy, Saujas Vaduguru, Karl D. D. Willis, and more
Potential Business Impact:
Teaches computers to change designs when told.
A key feature of human collaboration is the ability to iteratively refine the concepts we have communicated. In contrast, while generative AI excels at the generation of content, it often struggles to make specific language-guided modifications to its prior outputs. To bridge the gap between how humans and machines perform edits, we present mrCAD, a dataset of multimodal instructions collected in a communication game. In each game, players created computer-aided designs (CADs) and refined them over several rounds to match specific target designs. Only one player, the Designer, could see the target; they had to instruct the other player, the Maker, using text, drawing, or a combination of both modalities. mrCAD consists of 6,082 communication games and 15,163 instruction-execution rounds, played between 1,092 pairs of human players. We analyze the dataset and find that generation and refinement instructions differ in their composition of drawing and text. Using the mrCAD task as a benchmark, we find that state-of-the-art VLMs are better at following generation instructions than refinement instructions. These results lay a foundation for analyzing and modeling a multimodal language of refinement that is not represented in previous datasets.
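The abstract describes rounds in which a Designer's instruction can be text, drawing, or both, and can be either an initial generation or a refinement of a prior design. A minimal sketch of how one such instruction-execution round might be represented, assuming illustrative field names rather than the dataset's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical record for one mrCAD instruction-execution round.
# Field names (text, drawing_strokes, is_refinement) are illustrative
# assumptions, not the published dataset's actual schema.
@dataclass
class Round:
    round_index: int
    text: Optional[str]                       # Designer's text instruction, if any
    drawing_strokes: List[List[Tuple[int, int]]] = field(default_factory=list)
    is_refinement: bool = False               # False = initial generation round

    def modality(self) -> str:
        """Classify the instruction by which modalities it uses."""
        has_text = bool(self.text)
        has_drawing = bool(self.drawing_strokes)
        if has_text and has_drawing:
            return "text+drawing"
        if has_text:
            return "text"
        if has_drawing:
            return "drawing"
        return "empty"

# A toy two-round game: a generation instruction, then a refinement.
rounds = [
    Round(0, "draw a tall rectangle"),
    Round(1, "make it narrower", [[(0, 0), (0, 5)]], is_refinement=True),
]
print([r.modality() for r in rounds])  # → ['text', 'text+drawing']
```

A representation like this makes the paper's modality analysis straightforward: counting `modality()` values separately for generation rounds and refinement rounds would reproduce the kind of composition comparison the abstract reports.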
Similar Papers
Toward AI-driven Multimodal Interfaces for Industrial CAD Modeling
Human-Computer Interaction
Helps designers build 3D models faster with AI.
From Idea to CAD: A Language Model-Driven Multi-Agent System for Collaborative Design
Artificial Intelligence
Computers design 3D models from your drawings.
From Intent to Execution: Multimodal Chain-of-Thought Reinforcement Learning for Precise CAD Code Generation
Machine Learning (CS)
Computer designs 3D shapes from simple words.