Accelerating Detailed Routing Convergence through Offline Reinforcement Learning
By: Afsara Khan, Austin Rovinski
Potential Business Impact:
Teaches computers to design computer chips faster.
Detailed routing remains one of the most complex and time-consuming steps in modern physical design due to the challenges posed by shrinking feature sizes and stricter design rules. Prior detailed routers achieve state-of-the-art results by leveraging iterative pathfinding algorithms to route each net. However, runtime is a major issue in detailed routers, as converging to a solution with zero design rule violations (DRVs) can be prohibitively expensive. In this paper, we propose leveraging reinforcement learning (RL) to enable rapid convergence in detailed routing by learning from previous designs. We make the key observation that prior detailed routers statically schedule the cost weights used in their routing algorithms, meaning the weights do not change in response to the design or technology. By training a conservative Q-learning (CQL) model to dynamically select the routing cost weights that minimize the number of algorithm iterations, we find that our work completes the ISPD19 benchmarks with a 1.56x average and up to 3.01x faster runtime than the baseline router while maintaining or improving the DRV count in all cases. We also find that this learning shows signs of generalization across technologies, meaning that learning from designs in one technology can translate to improved outcomes in other technologies.
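The core idea above can be illustrated with a minimal sketch: treat the choice of routing cost weights at each rip-up-and-reroute iteration as an RL action, and train a conservative Q-function offline from logged routing runs. Everything concrete here is an assumption for illustration, not the authors' formulation: the weight presets, the DRV-bucket state abstraction, the synthetic dynamics, and the tabular stand-in for CQL (the paper's model is presumably a neural network).

```python
import numpy as np

# Hypothetical candidate cost-weight presets the router could switch between
# at each rip-up-and-reroute iteration (illustrative values only).
WEIGHT_PRESETS = [
    {"via": 2, "wrong_way": 4, "drv": 8},
    {"via": 4, "wrong_way": 8, "drv": 16},
    {"via": 8, "wrong_way": 16, "drv": 32},
]
N_STATES = 10                  # assumed state abstraction: discretized DRV-count buckets
N_ACTIONS = len(WEIGHT_PRESETS)

def cql_update(Q, batch, alpha=0.1, gamma=0.9, cql_weight=1.0):
    """Tabular stand-in for conservative Q-learning: a standard Q-learning
    step plus a regularizer that pushes down Q-values of actions absent
    from the logged data, discouraging out-of-distribution weight choices."""
    for s, a, r, s_next, done in batch:
        target = r + (0.0 if done else gamma * Q[s_next].max())
        Q[s, a] += alpha * (target - Q[s, a])
        # Gradient step on the CQL penalty  logsumexp(Q(s,.)) - Q(s, a_logged):
        z = Q[s] - Q[s].max()
        probs = np.exp(z) / np.exp(z).sum()
        Q[s] -= alpha * cql_weight * probs
        Q[s, a] += alpha * cql_weight
    return Q

# Tiny synthetic "offline" dataset of logged routing iterations.
# Reward is -1 per iteration, so the learned policy minimizes iterations
# to zero DRVs, mirroring the convergence objective described above.
rng = np.random.default_rng(0)
dataset = []
for _ in range(500):
    s = int(rng.integers(1, N_STATES))            # current DRV bucket
    a = int(rng.integers(N_ACTIONS))              # weight preset that was used
    s_next = max(0, s - int(rng.integers(0, 3)))  # DRVs shrink (synthetic dynamics)
    dataset.append((s, a, -1.0, s_next, s_next == 0))

Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(50):
    cql_update(Q, dataset)

# Greedy policy: which weight preset to select in each DRV bucket.
policy = Q.argmax(axis=1)
print(policy)
```

In a real router, the state would carry richer features (design rules, congestion, technology parameters), and the conservative penalty is what makes purely offline training viable: the model never gets to try weight schedules online, so it must stay close to the behavior seen in the logged runs.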
Similar Papers
Deep Reinforcement Learning for Multi-flow Routing in Heterogeneous Wireless Networks
Signal Processing
Helps devices pick best path for faster data.
HierRouter: Coordinated Routing of Specialized Large Language Models via Reinforcement Learning
Computation and Language
Makes smart computer programs run faster and cheaper.
Vehicle Routing Problems via Quantum Graph Attention Network Deep Reinforcement Learning
Machine Learning (CS)
Finds best delivery routes using quantum computers.