Domain Generalization: A Tale of Two ERMs
By: Yilun Zhu, Naihao Deng, Naichen Shi, and more
Potential Business Impact:
Helps models trained on data from several sources keep performing well on new, unseen sources.
Domain generalization (DG) is the problem of generalizing from several distributions (or domains), for which labeled training data are available, to a new test domain for which no labeled data are available. A common finding in the DG literature is that it is difficult to outperform empirical risk minimization (ERM) on the pooled training data. In this work, we argue that this finding has primarily been reported for datasets satisfying a covariate shift assumption. When the dataset satisfies a posterior drift assumption instead, we show that "domain-informed ERM," wherein feature vectors are augmented with domain-specific information, outperforms pooling ERM. These claims are supported by a theoretical framework and experiments on language and vision tasks.
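To make the distinction concrete, here is a minimal sketch (Python with NumPy and scikit-learn) contrasting pooled ERM with one simple form of domain-informed ERM under posterior drift. The synthetic data, the per-domain descriptor, and the augment function are illustrative assumptions for this sketch, not the paper's construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic posterior-drift setup (illustrative, not from the paper):
# P(x) is shared across domains, but P(y | x) depends on a known
# per-domain descriptor s in {-1, +1}.
def make_domain(n, s):
    X = rng.normal(size=(n, 5))
    y = (s * X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)
    return X, y

train_descriptors = [1.0, -1.0, 1.0]
domains = [make_domain(500, s) for s in train_descriptors]
X = np.vstack([Xd for Xd, _ in domains])
y = np.concatenate([yd for _, yd in domains])

# Pooled ERM: one model fit on all training data, blind to domain identity.
pooled = LogisticRegression().fit(X, y)

# Domain-informed ERM (one simple instantiation): append the domain
# descriptor and its interaction with the features, so a linear model
# can represent a per-domain posterior P(y | x, s).
def augment(Xd, s):
    s_col = np.full((len(Xd), 1), s)
    return np.hstack([Xd, s_col, s * Xd])

Xa = np.vstack([augment(Xd, s)
                for (Xd, _), s in zip(domains, train_descriptors)])
informed = LogisticRegression().fit(Xa, y)

# Unseen test domain with descriptor s = -1: the pooled model has averaged
# over contradictory labelings, while the augmented model can apply the
# flipped decision rule.
Xt, yt = make_domain(2000, -1.0)
print("pooled  :", pooled.score(Xt, yt))
print("informed:", informed.score(augment(Xt, -1.0), yt))
```

The intuition: under posterior drift, P(y | x) itself changes across domains, so no single domain-blind predictor can fit all training domains at once; giving the model the domain descriptor lets it express a per-domain rule and carry it to a new domain whose descriptor is known.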
Similar Papers
Single Domain Generalization with Adversarial Memory
Machine Learning (CS)
Trains a model on a single domain so it still works on unseen data.
Effect of Domain Generalization Techniques in Low Resource Systems
Computation and Language
Tests whether domain generalization helps language systems trained with little data.
Generative Classifier for Domain Generalization
CV and Pattern Recognition
Helps vision models recognize images from unfamiliar domains.