Enhanced Photonic Chip Design via Interpretable Machine Learning Techniques
By: Lirandë Pira, Airin Antony, Nayanthara Prathap, and more
Potential Business Impact:
Makes photonic chips perform better by explaining how their designs are optimized.
Photonic chip design has seen significant advancements with the adoption of inverse design methodologies, which offer flexibility and efficiency in optimizing device performance. However, the black-box nature of these optimization approaches, typically driven to minimize a loss function or maximize coupling efficiency, makes their outputs difficult to understand. The same lack of transparency affects machine learning-based optimization methods more broadly. Interpretability techniques address this opacity. In this work, we apply interpretability techniques from machine learning to better understand the inverse design optimization used for photonic components, specifically two-mode multiplexers. Our methodology builds on the widely used interpretability technique known as local interpretable model-agnostic explanations, or LIME. LIME-informed insights point us to more effective initial conditions, directly improving device performance. This demonstrates that interpretability methods can do more than explain models: they can actively guide and improve inverse-designed photonic components. Our results show that interpretable techniques can reveal underlying patterns in the inverse design process, leading to better-performing components.
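To make the approach concrete, the sketch below shows how LIME can be applied to a black-box figure of merit in an inverse-design setting, using the standard lime Python package in regression mode. The coupling_efficiency function, the 64-pixel design grid, and the random training designs are placeholders introduced here for illustration; they stand in for the photonic simulator and parameterization used in the paper and are not the authors' actual setup.

import numpy as np
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical black box: maps a flattened pixel pattern (design parameters)
# to a scalar figure of merit such as coupling efficiency. In practice this
# would be the photonic simulator or loss driving the inverse-design loop;
# here it is a stand-in surrogate for illustration only.
def coupling_efficiency(designs: np.ndarray) -> np.ndarray:
    # designs: shape (n_samples, n_pixels), values in [0, 1]
    weights = np.linspace(0.5, 1.5, designs.shape[1])
    return np.tanh(designs @ weights / designs.shape[1])

rng = np.random.default_rng(0)
n_pixels = 64                                   # assumed design-grid size
training_designs = rng.random((500, n_pixels))  # sampled candidate designs

explainer = LimeTabularExplainer(
    training_designs,
    feature_names=[f"pixel_{i}" for i in range(n_pixels)],
    mode="regression",
)

# Explain one candidate initial condition: which pixels push the predicted
# coupling efficiency up or down in its local neighbourhood?
candidate = rng.random(n_pixels)
explanation = explainer.explain_instance(
    candidate, coupling_efficiency, num_features=10
)
for feature, weight in explanation.as_list():
    print(f"{feature:>20s}  {weight:+.4f}")

The per-pixel weights returned by explain_instance indicate which regions of the design most influence the figure of merit locally; insights of this kind are what can then be fed back into the choice of initial conditions for the optimizer.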
Similar Papers
Inverse Design in Nanophotonics via Representation Learning
Applied Physics
Designs tiny light-bending parts faster with AI.
Interpretability as Alignment: Making Internal Understanding a Design Principle
Machine Learning (CS)
Makes AI understandable and safe for people.
Green LIME: Improving AI Explainability through Design of Experiments
Machine Learning (Stat)
Makes AI explain itself faster and cheaper.