Deep reinforcement learning for efficient exploration of combinatorial structural design spaces
By: Chloe S. H. Hong, Keith J. Lee, Caitlin T. Mueller
Potential Business Impact:
Designs buildings faster and better using smart computer rules.
This paper proposes a reinforcement learning framework for performance-driven structural design that combines bottom-up design generation with learned strategies to efficiently search large combinatorial design spaces. Motivated by the limitations of conventional top-down approaches such as optimization, the framework instead models structures as compositions of predefined elements, aligning form finding with practical constraints such as constructability and component reuse. By formulating the design task as a sequential decision-making problem and using a training algorithm inspired by human learning, the method adapts reinforcement learning to structural design. The framework is demonstrated on steel braced truss frame cantilever structures, where trained policies consistently generate distinct, high-performing designs that achieve structural and material efficiency through strategies aligned with known engineering principles. Further analysis shows that the agent efficiently narrows its search to promising regions of the design space, revealing transferable structural knowledge.
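The abstract frames bottom-up design as sequentially choosing predefined elements to add to a growing structure. The sketch below is a minimal illustration of that framing, not the authors' implementation: the environment name, the brace-slot action encoding, and the placeholder reward are all hypothetical assumptions, and a real system would score each step with structural analysis rather than the toy heuristic used here.

```python
import random
from dataclasses import dataclass, field

# Hypothetical sketch of structural design as sequential decision-making:
# the agent adds one predefined element (a diagonal brace in a cantilever
# truss bay) per step, and the episode ends when the element budget is spent.

@dataclass
class TrussDesignEnv:
    n_bays: int = 4              # cantilever bays along the span
    max_elements: int = 8        # budget of predefined elements to place
    placed: set = field(default_factory=set)

    def reset(self):
        self.placed = set()
        return self._state()

    def _state(self):
        # State: which candidate brace slots are already occupied.
        return frozenset(self.placed)

    def action_space(self):
        # Candidate actions: one diagonal brace per bay, in either direction.
        return [(bay, diag) for bay in range(self.n_bays) for diag in (+1, -1)
                if (bay, diag) not in self.placed]

    def step(self, action):
        self.placed.add(action)
        done = len(self.placed) >= self.max_elements or not self.action_space()
        # Placeholder reward favoring braces near the support; a real
        # implementation would evaluate structural performance instead.
        bay, diag = action
        reward = (self.n_bays - bay) / self.n_bays
        if any(b == bay and d == -diag for b, d in self.placed):
            reward -= 0.5        # penalize opposing braces crowding one bay
        return self._state(), reward, done

if __name__ == "__main__":
    env = TrussDesignEnv()
    state, done, total = env.reset(), False, 0.0
    while not done:
        action = random.choice(env.action_space())   # random-policy baseline
        state, reward, done = env.step(action)
        total += reward
    print(f"placed {len(env.placed)} braces, cumulative reward {total:.2f}")
```

In this framing, a trained policy would replace the random action choice, which is how a learned strategy can narrow the search to promising regions of a large combinatorial design space.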
Similar Papers
Structured Reinforcement Learning for Combinatorial Decision-Making
Machine Learning (CS)
Helps computers make better choices in complex situations.
Reinforcement learning framework for the mechanical design of microelectronic components under multiphysics constraints
Computational Physics
Designs tiny computer parts faster and better.
Performance Comparisons of Reinforcement Learning Algorithms for Sequential Experimental Design
Machine Learning (CS)
Teaches computers to pick the best science experiments.