Neural Optimal Design of Experiments for Inverse Problems
By: John E. Darges, Babak Maboudi Afkham, Matthias Chung
We introduce Neural Optimal Design of Experiments (NODE), a learning-based framework for optimal experimental design in inverse problems that avoids classical bilevel optimization and indirect sparsity regularization. NODE jointly trains a neural reconstruction model and a fixed-budget set of continuous design variables representing sensor locations, sampling times, or measurement angles, within a single optimization loop. By optimizing measurement locations directly rather than weighting a dense grid of candidates, the proposed approach enforces sparsity by design, eliminates the need for ℓ1-penalty tuning, and substantially reduces computational complexity. We validate NODE on an analytically tractable exponential growth benchmark and on MNIST image sampling, and illustrate its effectiveness on a real-world sparse-view X-ray CT example. In all cases, NODE outperforms baseline approaches, demonstrating improved reconstruction accuracy and task-specific performance.
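The core idea of jointly optimizing a fixed-budget set of continuous measurement locations together with a reconstruction model can be illustrated on the exponential growth benchmark mentioned above. The sketch below is a deliberately simplified toy, not the paper's implementation: it uses a linear "reconstruction network", finite-difference gradients in place of automatic differentiation, and hypothetical parameter ranges, but it shows how sampling times and reconstruction weights can be trained in a single loop with sparsity fixed by the measurement budget.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inverse problem (illustrative setup, not the paper's exact benchmark):
# recover theta = (a, b) from measurements y_i = a * exp(b * t_i),
# where the sampling times t are themselves learnable design variables.
thetas = rng.uniform([0.5, 0.1], [1.5, 0.5], size=(200, 2))  # training parameters

def forward(theta, t):
    a, b = theta[..., :1], theta[..., 1:]
    return a * np.exp(b * t)            # shape (n_samples, n_measurements)

def loss(p):
    # p packs the design variables t and the linear reconstruction (W, c).
    t, W, c = p[:2], p[2:6].reshape(2, 2), p[6:]
    y = forward(thetas, t)
    theta_hat = y @ W.T + c             # stand-in for a neural reconstruction model
    return np.mean((theta_hat - thetas) ** 2)

# Joint gradient descent on measurement times and reconstruction weights.
# A budget of 2 measurements is enforced by construction -- no l1 penalty needed.
p = np.concatenate([[0.2, 0.8], np.eye(2).ravel(), np.zeros(2)])
loss0 = loss(p)
for _ in range(500):
    grad = np.array([(loss(p + 1e-5 * e) - loss(p - 1e-5 * e)) / 2e-5
                     for e in np.eye(p.size)])
    p -= 0.05 * grad

print("learned measurement times:", p[:2])
print("loss before/after:", loss0, loss(p))
```

In an actual NODE setup the linear map would be replaced by a neural network and the finite differences by backpropagation, but the single-loop structure, with design variables updated alongside model weights, is the same.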