Who is Afraid of Minimal Revision?

Published: November 27, 2025 | arXiv ID: 2511.22386v1

By: Edoardo Baccini, Zoé Christoff, Nina Gierasimczuk, and more

Potential Business Impact:

Helps computers learn new things without forgetting old ones.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The principle of minimal change in belief revision theory requires that, when accepting new information, one keeps one's belief state as close to the initial belief state as possible. This is precisely what the method known as minimal revision does. However, unlike less conservative belief revision methods, minimal revision falls short in learning power: It cannot learn everything that can be learned by other learning methods. We begin by showing that, despite this limitation, minimal revision is still a successful learning method in a wide range of situations. Firstly, it can learn any problem that is finitely identifiable. Secondly, it can learn with positive and negative data, as long as one considers finitely many possibilities. We then characterize the prior plausibility assignments (over finitely many possibilities) that enable one to learn via minimal revision, and do the same for conditioning and lexicographic upgrade. Finally, we show that not all of our results still hold when learning from possibly erroneous information.
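
The abstract contrasts three revision policies over a prior plausibility assignment on finitely many possibilities: minimal revision, conditioning, and lexicographic upgrade. The following is a minimal sketch of one standard way these policies act, assuming possibilities are encoded as integer plausibility ranks (lower rank = more plausible); the rank-based encoding and all function names are illustrative assumptions, not the paper's formalism.

```python
def beliefs(ranks):
    """Believed worlds: the most plausible ones (minimal rank)."""
    best = min(ranks.values())
    return {w for w, r in ranks.items() if r == best}

def conditioning(ranks, phi):
    """Conditioning (update): discard every world where phi fails."""
    return {w: r for w, r in ranks.items() if w in phi}

def lexicographic(ranks, phi):
    """Lexicographic upgrade: all phi-worlds become more plausible than
    all non-phi-worlds; relative order within each group is preserved."""
    offset = max(ranks.values()) + 1
    return {w: (r if w in phi else r + offset) for w, r in ranks.items()}

def minimal_revision(ranks, phi):
    """Minimal (conservative) revision: only the *best* phi-worlds move
    to the top; every other world keeps its old rank."""
    best_phi = min(ranks[w] for w in phi if w in ranks)
    top = {w for w in phi if ranks.get(w) == best_phi}
    new_top = min(ranks.values()) - 1
    return {w: (new_top if w in top else r) for w, r in ranks.items()}

# Usage: three possibilities, initially believing w0; new info rules out w0.
ranks = {"w0": 0, "w1": 1, "w2": 2}
phi = {"w1", "w2"}

print(minimal_revision(ranks, phi))  # {'w0': 0, 'w1': -1, 'w2': 2}
print(lexicographic(ranks, phi))     # {'w0': 3, 'w1': 1, 'w2': 2}
print(conditioning(ranks, phi))      # {'w1': 1, 'w2': 2}
```

All three policies end up believing w1 here, but the resulting states differ in how conservative they are: minimal revision leaves w0 more plausible than w2 (the smallest change that accommodates phi), lexicographic upgrade demotes w0 below every phi-world, and conditioning eliminates w0 entirely. This difference in how much of the prior order survives is what drives the learning-power gap the abstract describes.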

Country of Origin
🇳🇱 🇩🇰 Netherlands, Denmark

Page Count
20 pages

Category
Computer Science:
Artificial Intelligence