Who is Afraid of Minimal Revision?
By: Edoardo Baccini, Zoé Christoff, Nina Gierasimczuk, and more
Potential Business Impact:
Helps computers learn new things without forgetting old ones.
The principle of minimal change in belief revision theory requires that, when accepting new information, one keeps one's belief state as close to the initial belief state as possible. This is precisely what the method known as minimal revision does. However, unlike less conservative belief revision methods, minimal revision falls short in learning power: it cannot learn everything that those other methods can. We begin by showing that, despite this limitation, minimal revision is still a successful learning method in a wide range of situations. First, it can learn any problem that is finitely identifiable. Second, it can learn with positive and negative data, as long as one considers finitely many possibilities. We then characterize the prior plausibility assignments (over finitely many possibilities) that enable one to learn via minimal revision, and do the same for conditioning and lexicographic upgrade. Finally, we show that not all of our results still hold when learning from possibly erroneous information.
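To make the contrast concrete, here is a minimal sketch (in Python, not from the paper) of the three revision policies the abstract compares, acting on a plausibility order represented as a list of tiers from most to least plausible. The world names and the tier-list encoding are illustrative assumptions; the paper works with plausibility assignments over possible worlds.

```python
# A sketch, assuming a plausibility order encoded as a list of tiers
# (order[0] = most plausible) and new information phi as a set of worlds.
# Tier names w1..w4 are hypothetical.

def conditioning(order, phi):
    """Conditioning (update): discard every world where phi fails."""
    return [tier for tier in (set(t) & phi for t in order) if tier]

def lexicographic(order, phi):
    """Lexicographic upgrade: all phi-worlds become more plausible than
    all non-phi-worlds; relative order within each group is preserved."""
    phi_tiers = [set(t) & phi for t in order]
    non_tiers = [set(t) - phi for t in order]
    return [t for t in phi_tiers + non_tiers if t]

def minimal(order, phi):
    """Minimal (conservative) revision: only the most plausible phi-worlds
    move to the top; the rest of the order is left untouched."""
    best = next(set(t) & phi for t in order if set(t) & phi)
    rest = [set(t) - best for t in order]
    return [best] + [t for t in rest if t]

order = [{"w1"}, {"w2", "w3"}, {"w4"}]
phi = {"w3", "w4"}  # new information: "the actual world is w3 or w4"

print(minimal(order, phi))        # [{'w3'}, {'w1'}, {'w2'}, {'w4'}]
print(lexicographic(order, phi))  # [{'w3'}, {'w4'}, {'w1'}, {'w2'}]
print(conditioning(order, phi))   # [{'w3'}, {'w4'}]
```

The outputs illustrate the sense in which minimal revision is the most conservative of the three: it promotes only the best phi-worlds and leaves everything else in place, which is plausibly also the source of the gap in learning power the abstract describes.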
Similar Papers
How Do People Revise Inconsistent Beliefs? Examining Belief Revision in Humans with User Studies
Artificial Intelligence
Helps computers learn how people change their minds.
Iterated belief revision: from postulates to abilities
Artificial Intelligence
Helps computers learn and change beliefs.
On Definite Iterated Belief Revision with Belief Algebras
Artificial Intelligence
Makes computers learn and change their minds predictably.