The Rademacher complexity of a set $A \subseteq \mathbb{R}^m$ is
$$\operatorname{Rad}(A) = \frac{1}{m}\,\mathbb{E}_{\sigma}\left[\sup_{a \in A} \sum_{i=1}^m \sigma_i a_i\right]$$
where $\sigma_1,\dots,\sigma_m$ are i.i.d. random variables uniform in $\{-1,+1\}$.
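A minimal Monte Carlo sketch of this definition for a finite set of vectors (NumPy; the function name and the sanity checks are illustrative, not from these notes):

```python
import numpy as np

def empirical_rademacher(A, n_samples=10_000, rng=None):
    """Monte Carlo estimate of Rad(A) = (1/m) * E_sigma[ sup_{a in A} <sigma, a> ]
    for a finite set A, given as an (n_vectors, m) array of rows a."""
    rng = np.random.default_rng(rng)
    A = np.asarray(A, dtype=float)
    m = A.shape[1]
    sigma = rng.choice([-1.0, 1.0], size=(n_samples, m))  # i.i.d. uniform +-1 signs
    sups = (sigma @ A.T).max(axis=1)                      # sup over A, one value per sigma draw
    return sups.mean() / m

# Sanity checks: the full cube {-1,+1}^m has Rad comp exactly 1 (take a = sigma),
# while a singleton has Rad comp 0 (the sup is just <sigma, a>, whose mean is 0).
m = 10
full_cube = np.array([[2 * ((i >> j) & 1) - 1 for j in range(m)] for i in range(2**m)])
print(empirical_rademacher(full_cube))        # ~ 1.0
print(empirical_rademacher(np.ones((1, m))))  # ~ 0.0
```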
See here for the book chapter (UML); also the notes in LectureNotes on the tablet (ML-AI/Learning theory), and these notes, and here.
An important tool for bounding the expected supremum of a random process is the Rademacher complexity.
See Learning theory
Expected Rademacher complexity bounds the expected Representativeness of the training set:
$$\mathbb{E}_{S\sim\mathcal{D}^m}\!\left[\operatorname{Rep}_{\mathcal{D}}(\mathcal{F},S)\right] \;\le\; 2\,\mathbb{E}_{S\sim\mathcal{D}^m}\!\left[\operatorname{Rad}(\ell\circ\mathcal{F}\circ S)\right]$$
where $\ell\circ\mathcal{F}\circ S$ is the set of loss vectors $\{(\ell(f,z_1),\dots,\ell(f,z_m)) : f\in\mathcal{F}\}$, and the representativeness is defined as:
$$\operatorname{Rep}_{\mathcal{D}}(\mathcal{F},S) = \sup_{f\in\mathcal{F}}\big(L_{\mathcal{D}}(f) - L_S(f)\big)$$
with $L_{\mathcal{D}}$ the true risk and $L_S$ the empirical risk on $S$.
The proof works by dividing the term inside the Rad comp across two i.i.d. training sets (symmetrization). We then use the fact that swapping i.i.d. examples between the two training sets leaves the expectation unchanged; since such a swap turns out to be equivalent to flipping the sign of the corresponding Rad variable, the expectation over the training sets is the same for every value of the Rad vars, and so we can take (or drop) the expectation over them for free (see the sketch below).
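A sketch of the symmetrization chain (the standard argument, reconstructed here rather than copied from these notes; $S'$ is an independent "ghost" sample):
$$\begin{aligned}
\mathbb{E}_{S}\Big[\sup_{f\in\mathcal{F}} \big(L_{\mathcal{D}}(f)-L_S(f)\big)\Big]
 &= \mathbb{E}_{S}\Big[\sup_{f\in\mathcal{F}} \mathbb{E}_{S'}\big[L_{S'}(f)-L_S(f)\big]\Big]
 \;\le\; \mathbb{E}_{S,S'}\Big[\sup_{f\in\mathcal{F}} \tfrac{1}{m}\textstyle\sum_{i=1}^m \big(\ell(f,z'_i)-\ell(f,z_i)\big)\Big] \\
 &= \mathbb{E}_{\sigma}\,\mathbb{E}_{S,S'}\Big[\sup_{f\in\mathcal{F}} \tfrac{1}{m}\textstyle\sum_{i=1}^m \sigma_i\big(\ell(f,z'_i)-\ell(f,z_i)\big)\Big]
 \;\le\; 2\,\mathbb{E}_{S}\big[\operatorname{Rad}(\ell\circ\mathcal{F}\circ S)\big]
\end{aligned}$$
The first inequality moves the sup inside the expectation over $S'$; the middle equality holds because swapping $z_i \leftrightarrow z'_i$ is the same as flipping $\sigma_i$ and leaves $\mathbb{E}_{S,S'}$ unchanged, so averaging over $\sigma$ is free; the last step splits the sup into two terms and uses that $-\sigma$ has the same distribution as $\sigma$.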
This allows us to bound Generalization gaps both in expectation and with high probability (using McDiarmid's inequality).
The first result is that, via McDiarmid's concentration inequality, $\operatorname{Rep}_{\mathcal{D}}(\mathcal{F},S)$ is close to its expectation w.h.p. over $S$, so that if we can bound $\mathbb{E}_S[\operatorname{Rad}(\ell\circ\mathcal{F}\circ S)]$, we are bounding the worst-case difference between the true risk and the empirical risk, w.h.p. To bound this expectation we would need to know $\mathcal{D}$, which we don't; so instead we show that, w.h.p. over training sets $S$, $\operatorname{Rad}(\ell\circ\mathcal{F}\circ S)$ is close to its expectation (via the same concentration inequality as above), and then we can estimate the expectation from the $\operatorname{Rad}(\ell\circ\mathcal{F}\circ S)$ that we compute on our training set. So the bound depends on the training set; the key, however, is that it doesn't depend much on the particular draw $S$, but it may depend significantly on the underlying distribution $\mathcal{D}$. This is similar to how bounds like that for Occam algorithms depend on the target function, or how other Data-dependent generalization bounds depend on the target distribution, to bypass the problem of distribution-free worst-case bounds. However, it is still worst-case over $\mathcal{F}$, and it doesn't allow for a biased distribution over functions, like PAC-Bayes does.
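Concretely, the kind of high-probability statement this argument yields (constants hedged with $O(\cdot)$; they depend on the range of the loss) is: with probability at least $1-\delta$ over $S\sim\mathcal{D}^m$, for every $f\in\mathcal{F}$,
$$L_{\mathcal{D}}(f) \;\le\; L_S(f) \;+\; 2\,\operatorname{Rad}(\ell\circ\mathcal{F}\circ S) \;+\; O\!\left(\sqrt{\frac{\ln(1/\delta)}{m}}\right)$$
Both $\operatorname{Rep}_{\mathcal{D}}$ and the empirical Rad comp have bounded differences when a single example is replaced (for a bounded loss), which is what lets McDiarmid be applied twice.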
So for instance you could have a hypothesis class that contains the vector of all $+1$s, and then all vectors with half $+1$s and half $-1$s. For random target functions this would have high expected Rad comp and generalize badly, but for the target function of all $+1$s it would have low expected Rad comp and generalize well. However, for the hypothesis class of all functions, the Rad comp would be high for all targets!
I think there are probably some ways to solve this and incorporate some bias within $\mathcal{F}$. This would require making the bound depend on the particular hypothesis $f$, like something based on Structural risk minimization, with a set of (typically nested) classes $\mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots$ which have different Rad comps (see the sketch below).
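A sketch of what such an SRM-style bound could look like (the weights $w_k$ are a hypothetical prior over classes, not from these notes): take classes $\mathcal{F}_1\subseteq\mathcal{F}_2\subseteq\cdots$ and weights $w_k\ge 0$ with $\sum_k w_k\le 1$; applying the bound above to each class with confidence $w_k\delta$ and taking a union bound gives, with probability at least $1-\delta$, for every $k$ and every $f\in\mathcal{F}_k$,
$$L_{\mathcal{D}}(f) \;\le\; L_S(f) \;+\; 2\,\operatorname{Rad}(\ell\circ\mathcal{F}_k\circ S) \;+\; O\!\left(\sqrt{\frac{\ln\!\big(1/(w_k\delta)\big)}{m}}\right)$$
so the penalty a hypothesis pays is governed by the Rad comp of the smallest class containing it, which is one way of encoding a bias over $\mathcal{F}$.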
Paper with applications of Rademacher complexity to classes of margin classifiers: http://www.jmlr.org/papers/volume5/luxburg04b/luxburg04b.pdf
multiplying a set of vectors by a constant scales the Rademacher complexity by the absolute value of that constant, while translating them by a constant vector leaves it unchanged (so it behaves simply under this kind of affine transformation). The translation invariance demonstrates that Rademacher complexity is not only about how well the vectors match the signs of the sigmas; it may only encode this if we limit the norm of the vectors (e.g. when working with a bounded loss).
taking the convex hull of a set of vectors doesn't change the Rademacher complexity. This is because the supremum of a linear objective over a set doesn't change when we take it over the convex hull of the set instead (checked numerically below).
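A quick self-contained numeric check of both properties (the estimator, the set $A$, and the constants are illustrative; the same sigma seed is reused so the comparisons are tight):

```python
import numpy as np

def empirical_rademacher(A, n_samples=20_000, rng=None):
    """Monte Carlo estimate of (1/m) * E_sigma[ sup_{a in A} <sigma, a> ] for a finite set A."""
    rng = np.random.default_rng(rng)
    A = np.asarray(A, dtype=float)
    sigma = rng.choice([-1.0, 1.0], size=(n_samples, A.shape[1]))
    return (sigma @ A.T).max(axis=1).mean() / A.shape[1]

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 8))                 # 20 vectors in R^8
c, a0 = 3.0, rng.normal(size=8)              # scale and translation

rad_A = empirical_rademacher(A, rng=1)
rad_affine = empirical_rademacher(c * A + a0, rng=1)
print(rad_affine / rad_A)                    # ~ |c| = 3: scaling scales Rad comp, translation doesn't change it

# Convex hull: a linear objective is maximized at an extreme point, so adding
# convex combinations of the rows of A leaves the Rademacher complexity unchanged.
weights = rng.dirichlet(np.ones(len(A)), size=200)
hull_points = weights @ A                    # 200 random convex combinations of rows of A
rad_hull = empirical_rademacher(np.vstack([A, hull_points]), rng=1)
print(rad_hull / rad_A)                      # ~ 1
```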
Rademacher complexity contraction lemma
Margin-based generalization bound
Together with margin-based bounds, these are used in Deep learning theory.