From learnable objects to learnable random objects
By: Aaron Anderson, Michael Benedikt
Potential Business Impact:
Teaches computers to learn from fewer examples.
We consider the relationship between learnability of a "base class" of functions on a set $X$ and learnability of a class of statistical functions derived from the base class. For example, we refine results showing that learnability of a family $\{h_p : p \in Y\}$ of functions implies learnability of the family of functions $h_\mu = \lambda p : Y.\, E_\mu(h_p)$, where $E_\mu$ is the expectation with respect to $\mu$, and $\mu$ ranges over probability distributions on $X$. We will look at both Probably Approximately Correct (PAC) learning, where example inputs and outputs are chosen at random, and online learning, where the examples are chosen adversarially. For agnostic learning, we establish improved bounds on the sample complexity of learning for statistical classes, stated in terms of combinatorial dimensions of the base class. We connect these problems to techniques introduced in model theory for "randomizing a structure". We also provide counterexamples for realizable learning, in both the PAC and online settings.
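To make the construction of the derived statistical class concrete, here is a minimal sketch (not from the paper) that assumes a toy base class of threshold functions $h_p$ on $X = \mathbb{R}$ and approximates the derived function $h_\mu(p) = E_{x \sim \mu}[h_p(x)]$ by sampling; the names `base_fn`, `derived_fn`, and `n_samples` are illustrative only.

```python
import numpy as np

def base_fn(p, x):
    """Base class member h_p : X -> {0, 1}; here a threshold at p (toy example)."""
    return 1.0 if x >= p else 0.0

def derived_fn(p, mu_sampler, n_samples=10_000, rng=None):
    """Monte Carlo estimate of h_mu(p) = E_{x ~ mu}[h_p(x)],
    i.e. one member of the derived statistical class, evaluated at p in Y."""
    rng = rng or np.random.default_rng(0)
    xs = mu_sampler(rng, n_samples)
    return float(np.mean([base_fn(p, x) for x in xs]))

# Example: mu = standard normal, so h_mu(p) = P(x >= p).
standard_normal = lambda rng, n: rng.standard_normal(n)
print(derived_fn(0.0, standard_normal))   # roughly 0.5
print(derived_fn(1.0, standard_normal))   # roughly 0.16
```

The point of the sketch is only to show how each distribution $\mu$ on $X$ gives rise to a real-valued function on the parameter space $Y$; the paper's results concern how learnability of the base class transfers to this derived class.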
Similar Papers
Samplability makes learning easier
Computational Complexity
Makes computers learn more with less data.
Computational-Statistical Tradeoffs from NP-hardness
Computational Complexity
Makes computers learn harder things faster.
Recursively Enumerably Representable Classes and Computable Versions of the Fundamental Theorem of Statistical Learning
Machine Learning (CS)
Teaches computers to learn from data better.