Active inference and artificial reasoning
By: Karl Friston, Lancelot Da Costa, Alexander Tschantz, et al.
This technical note considers the sampling of outcomes that provide the greatest amount of information about the structure of the underlying world model. This generalisation furnishes a principled approach to structure learning under a plausible set of generative models or hypotheses. In active inference, policies (i.e., combinations of actions) are selected according to their expected free energy, which comprises expected information gain and expected value. Information gain corresponds to the Kullback-Leibler (KL) divergence between predictive posteriors with, and without, the consequences of action. Posteriors over models can be evaluated quickly and efficiently using Bayesian model reduction, based upon accumulated posterior beliefs about model parameters. The ensuing information gain can then be used to select actions that disambiguate among alternative models, in the spirit of optimal experimental design. We illustrate this kind of active selection or reasoning using partially observed discrete models; namely, a 'three-ball' paradigm used previously to describe artificial insight and 'aha moments' via (synthetic) introspection or sleep. We focus on the sample efficiency afforded by seeking outcomes that resolve the greatest uncertainty about the world model under which outcomes are generated.
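The decomposition of expected free energy described above (expected value plus expected information gain) can be sketched for a discrete, partially observed model. The sketch below is an illustrative assumption, not the paper's implementation: all matrix names (`A`, `B_pi`, `q_s`, `log_C`) follow common conventions for discrete active inference, and the epistemic term is computed via the equivalent mutual-information form, H(o) minus expected ambiguity, rather than the KL divergence directly.

```python
import numpy as np

def negative_expected_free_energy(A, B_pi, q_s, log_C):
    """Score one policy by its negative expected free energy (higher is better).

    A      : likelihood matrix p(o|s), shape (n_obs, n_states)
    B_pi   : state-transition matrix under the policy, p(s'|s), shape (n_states, n_states)
    q_s    : current posterior over hidden states, shape (n_states,)
    log_C  : log prior preferences over outcomes, shape (n_obs,)
    """
    q_s_next = B_pi @ q_s                 # predictive posterior over next states
    q_o = A @ q_s_next                    # predictive posterior over outcomes
    # Expected value: how well predicted outcomes match prior preferences.
    value = q_o @ log_C
    # Expected information gain about states, via mutual information:
    # I(o; s) = H(o) - E_q(s')[H(o|s')], which equals the expected KL
    # divergence between posterior and prior beliefs about states.
    ambiguity = (-np.sum(A * np.log(A + 1e-16), axis=0)) @ q_s_next
    entropy_o = -np.sum(q_o * np.log(q_o + 1e-16))
    info_gain = entropy_o - ambiguity
    return value + info_gain
```

Under these assumptions, policy selection would take a softmax over these scores across policies; an unambiguous likelihood mapping (e.g., an identity `A`) yields high epistemic value, while a flat `A` yields none, so observations are sought precisely where they resolve uncertainty.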