Realizability assumption

cosmos 19th December 2018 at 7:01pm
Learning theory

In general, this is the assumption that the set of predictors available to an Estimator or Learning algorithm contains the Bayes estimator, i.e. a predictor achieving the Bayes risk.

In the case of deterministic predictors and zero Bayes risk, this is equivalent to: there exists h* ∈ H (a hypothesis in the Hypothesis class) such that L_{(D,f)}(h*) = 0 (the true error is zero). Note that this assumption implies that, with probability 1 over random samples S whose instances are sampled according to D and labeled by f, we have L_S(h*) = 0 (the Empirical error is zero).
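A minimal sketch of this in code (the hypothesis class, threshold value, and sample size below are illustrative assumptions, not from the source): when the true labeling function f is itself a member of the hypothesis class H, empirical risk minimization over a sample labeled by f attains zero empirical error.

```python
import random

def make_threshold(t):
    """Hypothesis h_t(x) = 1 if x >= t, else 0 (illustrative class)."""
    return lambda x: 1 if x >= t else 0

# Hypothesis class H: threshold classifiers on a grid over [0, 1].
# The true threshold 0.3 is on the grid, so f is in H -- realizability holds.
H = [make_threshold(t / 10) for t in range(11)]
f = make_threshold(0.3)  # true labeling function, a member of H

# Sample S: instances drawn from D (uniform on [0, 1]), labeled by f.
random.seed(0)
S = [(x, f(x)) for x in (random.random() for _ in range(100))]

def empirical_error(h, sample):
    """Fraction of sample points the hypothesis h misclassifies."""
    return sum(h(x) != y for x, y in sample) / len(sample)

# ERM: pick the hypothesis in H with minimal empirical error.
h_erm = min(H, key=lambda h: empirical_error(h, S))

# Under realizability, ERM's empirical error is exactly zero.
print(empirical_error(h_erm, S))
```

Since f ∈ H, the hypothesis h* = f has L_S(h*) = 0 on any sample labeled by f, so the ERM minimum is also 0, matching the claim above.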