Performance Comparisons of Reinforcement Learning Algorithms for Sequential Experimental Design
By: Yasir Zubayr Barlas, Kizito Salako
Potential Business Impact:
Teaches computers to pick the best science experiments.
Recent developments in sequential experimental design seek to construct a policy that can efficiently navigate the design space in a way that maximises the expected information gain. Whilst there is work on achieving tractable policies for experimental design problems, there is significantly less work on obtaining policies that generalise well, i.e. that give good performance despite a change in the underlying statistical properties of the experiments. Conducting experiments sequentially has recently brought about the use of reinforcement learning, where an agent is trained to navigate the design space and select the most informative designs for experimentation. However, there is still a lack of understanding about the benefits and drawbacks of particular reinforcement learning algorithms for training these agents. In our work, we investigate several reinforcement learning algorithms and their efficacy in producing agents that take maximally informative design decisions in sequential experimental design scenarios. We find that agent performance depends on the algorithm used for training, and that particular algorithms, using dropout or ensemble approaches, empirically showcase attractive generalisation properties.
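To make the core idea concrete, the sketch below shows a deliberately simplified version of the setup the abstract describes: an agent chooses experimental designs and is rewarded by the information gained about an unknown parameter. This toy example is not from the paper; it assumes a linear-Gaussian model y = θ·x + noise, for which the information gain of a design x has the closed form 0.5·log(1 + x²·var(θ)/var(noise)), and uses a simple tabular, epsilon-greedy agent in place of the deep reinforcement learning algorithms the paper actually compares.

```python
import numpy as np

# Hypothetical toy problem (an illustrative assumption, not the paper's setup):
# observe y = theta * x + noise, where the "design" is the choice of x.
# In this linear-Gaussian model, the expected information gain (entropy
# reduction of the Gaussian posterior over theta) is available in closed form,
# so an agent rewarded by it should learn to prefer more informative designs.

rng = np.random.default_rng(0)
designs = np.array([0.1, 0.5, 1.0, 2.0])  # discrete design space
var_noise = 1.0

def info_gain(x, var_theta):
    """Closed-form information gain of design x for the linear-Gaussian model."""
    return 0.5 * np.log(1.0 + x**2 * var_theta / var_noise)

# Tabular epsilon-greedy agent: value estimate per design, updated with an
# incremental mean of the information-gain reward.
q = np.zeros(len(designs))
counts = np.zeros(len(designs))
for step in range(2000):
    var_theta = rng.uniform(0.5, 2.0)  # prior scale varies per episode
    if rng.random() < 0.1:
        a = int(rng.integers(len(designs)))  # explore
    else:
        a = int(np.argmax(q))                # exploit
    r = info_gain(designs[a], var_theta)
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]

best = designs[int(np.argmax(q))]
print(best)  # the agent settles on the largest (most informative) design
```

In this model larger |x| always yields more information, so the check on the learned values is straightforward; the paper's contribution lies in comparing how different, far more capable reinforcement learning algorithms (including dropout- and ensemble-based ones) behave when the statistical properties of the experiments change.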
Similar Papers
Efficient Preference-Based Reinforcement Learning: Randomized Exploration Meets Experimental Design
Machine Learning (CS)
Teaches computers to learn from your choices.
Efficient Adaptation of Reinforcement Learning Agents to Sudden Environmental Change
Machine Learning (CS)
Helps robots learn new tricks without forgetting old ones.
Preference Optimization for Combinatorial Optimization Problems
Machine Learning (CS)
Teaches computers to solve hard puzzles better.