Memorizing Long-tail Data Can Help Generalization Through Composition
By: Mo Zhou, Haoyang Ma, Rong Ge
Potential Business Impact:
Helps computers learn rare things by remembering.
Deep learning has led researchers to rethink the relationship between memorization and generalization. In many settings, memorization does not hurt generalization thanks to implicit regularization, and it may even help by capturing long-tailed examples. In this paper, we consider the synergy between memorization and simple composition -- the ability to make correct predictions on combinations of long-tailed features. Theoretically, we show that in a linear setting, memorization together with composition can help the model make correct predictions on rare test examples that require a combination of long-tailed features, even if such combinations were never observed in the training data. Experiments with neural network architectures on simple data show that the theoretical insight extends beyond the linear setting, and we further observe that the composition capability of the model depends on its architecture.
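To make the linear-setting claim concrete, here is a minimal sketch, not the paper's exact construction: labels are exactly linear in sparse binary features, each long-tail feature appears only once in training (and never alongside another long-tail feature), and the model is the minimum-norm least-squares interpolator, which memorizes every training example. All names and sizes (d_common, d_tail, w_star) are illustrative assumptions.

```python
# Illustrative sketch: a min-norm linear fit memorizes rare examples,
# yet composes correctly on an unseen combination of long-tail features.
import numpy as np

rng = np.random.default_rng(0)
d_common, d_tail = 5, 20
d = d_common + d_tail
w_star = rng.normal(size=d)          # ground-truth linear labeler

X_train = []
# Many "head" examples: random pairs of common features only.
for _ in range(200):
    x = np.zeros(d)
    x[rng.choice(d_common, size=2, replace=False)] = 1.0
    X_train.append(x)
# One example per long-tail feature, paired with a common feature;
# no two long-tail features ever co-occur in training.
for j in range(d_tail):
    x = np.zeros(d)
    x[rng.integers(d_common)] = 1.0
    x[d_common + j] = 1.0
    X_train.append(x)
X_train = np.array(X_train)
y_train = X_train @ w_star           # noiseless linear labels

# Minimum-norm interpolator: fits (memorizes) every training example.
w_hat = np.linalg.pinv(X_train) @ y_train

# Test on two long-tail features that never appeared together.
x_test = np.zeros(d)
x_test[d_common], x_test[d_common + 1] = 1.0, 1.0
print("true:", x_test @ w_star, "predicted:", x_test @ w_hat)
```

The two printed values agree: because each long-tail direction was seen at least once, it lies in the span of the training data, so the memorizing linear fit extrapolates correctly to their never-observed combination. This is the composition effect the abstract describes, in its simplest form.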
Similar Papers
Trustworthy Machine Learning via Memorization and the Granular Long-Tail: A Survey on Interactions, Tradeoffs, and Beyond
Machine Learning (CS)
Teaches computers to remember good and bad data.
Too Big to Think: Capacity, Memorization, and Generalization in Pre-Trained Transformers
Machine Learning (CS)
Makes computers remember facts or solve new problems.
Learning by Analogy: A Causal Framework for Composition Generalization
Machine Learning (CS)
Lets computers understand new ideas by breaking them down.