So, in Supervised learning theory people are pretty obsessed with proving bounds on the Generalization error, so let's try that.
Check my Introduction to supervised learning theory for a more detailed description of all the terms and things. But I don't actually mind repeating myself, so let's see.
We want to model the idea of learning a pattern from a set of examples. What kind of pattern? Well, each of these examples is actually a pair of things. One thing we call an input, and the other thing we call an output. The pattern we want to learn is a Function, that is, a procedure for predicting an output when we are given an input.
Examples.
As the last example illustrates, in the general case we assume that the output could be sampled from a probability distribution that depends on the input, rather than being deterministic. But actually, for simplicity, I'm going to assume a deterministic function for now.
Next assumption! We assume the set of examples is sampled independently and identically from a probability distribution, like coin tosses are.
We want to predict whether a particular function will predict correctly on new independent samples from the same distribution. We call the probability that a function (aka hypothesis) $h$ predicts incorrectly the generalization error, with symbol $L_{\mathcal{D}}(h)$.
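To make this concrete, here's a tiny Python sketch (my own toy setup, nothing canonical): inputs uniform on $[0,1]$, a made-up target and hypothesis, generalization error estimated by Monte Carlo, and the error on a finite i.i.d. sample next to it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all choices here are mine): inputs uniform on [0, 1],
# target concept is 1[x > 0.5], hypothesis h is 1[x > 0.6].
def target(x): return (x > 0.5).astype(int)
def h(x): return (x > 0.6).astype(int)

# Generalization error L_D(h): probability h disagrees with the target
# under the input distribution, estimated by Monte Carlo on a fresh sample.
x = rng.uniform(size=1_000_000)
print("L_D(h) ~", np.mean(h(x) != target(x)))   # should be close to 0.1

# Error of h on an i.i.d. sample of size m (the training error, once we
# define it below): close to L_D(h), but fluctuating.
m = 100
x_sample = rng.uniform(size=m)
print("sample error =", np.mean(h(x_sample) != target(x_sample)))
```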
Now, I'm going to prove one of the more advanced results in this theory, so sorry if it's a bit of a jump :P (it will be unless you're already familiar, I guess)
We call the set of examples which makes up our training set $S$, and its size $m$.
Trying to get a tight version as per discussion here: https://twitter.com/guillefix/status/1129103523092803584
Now, we call the true function that determines the outputs given inputs in the set of examples the target function. We assume the outputs in this case are just $0$ or $1$ (aka binary classification). In this case we sometimes call a function a concept, and concepts are 1-to-1 with sets (the set corresponding to a function is the set of all inputs mapping to $1$).
Without loss of generality, we assume the target concept to be the empty set, i.e. the function with output uniformly $0$. You can think of this as an input-dependent relabelling of the outputs, which doesn't change anything information-theoretically (labels are just labels, I can call them duck/orange or whatever instead of 0/1, and the labelling can depend on the particular input too, as long as it's fixed).
We now assume that the algorithm can only output functions within a certain set of functions, which we call the hypothesis class, and denote $\mathcal{H}$.
We will denote the fraction of examples in $S$ which a hypothesis $h$ misclassifies (the training error) by $L_S(h)$.
Call $\{L_{\mathcal{D}}(h) - L_S(h) > \epsilon\}$ property $B(h)$. Let event $B$ be $\{\exists h \in \mathcal{H} : L_{\mathcal{D}}(h) - L_S(h) > \epsilon\}$, that is, the event that there exists a function in $\mathcal{H}$ whose difference between generalization and training error is higher than some constant $\epsilon$.
If no such function exists, then we can guarantee for our algorithm (which can only produce functions in $\mathcal{H}$) that the generalization error is close (within $\epsilon$) to the training error, so that minimizing the training error approximately minimizes the generalization error. Proving that generalization error and training error are close is the goal of Agnostic learning theory at large.
So the theorem we want to prove is that the probability of such a function existing, that is, the probability of $B$, is low.
This is not too hard if $\mathcal{H}$ is a finite set, see Probably approximately correct. You just use the Union bound, after bounding the probability that a single function has the bad property ($B(h)$). But we are hardcore, so let's consider the possibility of the set $\mathcal{H}$ being infinite.
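As a quick sanity check of the finite case, here's the union-bounded Hoeffding bound $|\mathcal{H}|\,e^{-2m\epsilon^2}$ as a one-liner (the numbers plugged in are just illustrative, my choice):

```python
import numpy as np

# Finite-class bound: Hoeffding gives P[L_D(h) - L_S(h) > eps] <= exp(-2 m eps^2)
# for each fixed h; the Union bound then multiplies by |H|.
def finite_class_bound(H_size, m, eps):
    return H_size * np.exp(-2 * m * eps**2)

# Illustrative numbers (mine): even a million hypotheses only costs
# a log factor in the required sample size.
print(finite_class_bound(H_size=1e6, m=5_000, eps=0.05))  # ~1.4e-05
```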
Aha, here's the most clever trick of all supervised learning theory. We can transform a statement about an infinite set into a statement about a finite one as follows.
First, assume $L_{\mathcal{D}}(h) - L_S(h) > \epsilon$ for some $h \in \mathcal{H}$, i.e. assume event $B$. Then sample an extra $m$ samples, and call the new sample $S'$. We can show that if the true error is $L_{\mathcal{D}}(h)$, then it's very unlikely that the error on the new sample, $L_{S'}(h)$, is very different. In particular, using Chernoff bounds we can show that

$$P\left[\,|L_{\mathcal{D}}(h) - L_{S'}(h)| > \epsilon/2\,\right] \le 2e^{-m\epsilon^2/2}$$
(see page 348, sec 28.3, in Understanding Machine Learning by Shai and Shai)
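If you want to see this bound in action, here's a little simulation (the true error, sample size, and $\epsilon$ are arbitrary choices of mine): treat $L_{S'}(h)$ as the mean of $m$ Bernoulli errors with the true error as bias, and compare the empirical deviation frequency to the bound.

```python
import numpy as np

rng = np.random.default_rng(1)

# Check P[|L_D(h) - L_{S'}(h)| > eps/2] <= 2 exp(-m eps^2 / 2) by simulation:
# L_{S'}(h) is the mean of m Bernoulli(L_D(h)) error indicators.
p, m, eps = 0.2, 200, 0.2   # hypothetical true error, sample size, gap
errors = rng.binomial(1, p, size=(100_000, m)).mean(axis=1)
empirical = np.mean(np.abs(errors - p) > eps / 2)
bound = 2 * np.exp(-m * eps**2 / 2)
print(empirical, bound)  # the empirical frequency should sit below the bound
```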
Now, call $\{\,\exists h \in \mathcal{H} : L_{S'}(h) - L_S(h) > \epsilon/2\,\}$ property $B'$.
Then, on the event $B$, if additionally $|L_{\mathcal{D}}(h) - L_{S'}(h)| \le \epsilon/2$ for the witness $h$, we get $L_{S'}(h) - L_S(h) > \epsilon/2$, i.e. $B'$ holds.

So $P[B'] \ge P[B' \cap B] = P[B' \mid B]\,P[B]$, and we'd like to lower bound $P[B' \mid B]$ by a constant.
eeh not working. See below.
Hmmm, let us assume for now that $P[B' \mid B] \ge 1/2$, so that $P[B] \le 2\,P[B']$. They say here that in fact this is true (assuming $m$ is big enough relative to $1/\epsilon^2$). But how to show it? Well, the Chernoff bound is an upper bound on the probability that $L_{S'}(h) \le \epsilon/2$ if the binomial probability is $\epsilon$. For binomial probability bigger than $\epsilon$ (which is what $B$ guarantees, since $L_{\mathcal{D}}(h) > L_S(h) + \epsilon \ge \epsilon$), the probability will only decrease, as the Binomial distribution only becomes more right-skewed with increasing bias, so the upper bound still works*. Now, because of the condition on $m$, ensuring $m\epsilon^2$ is big enough, we can guarantee $P[B' \mid B] \ge 1/2$ by guaranteeing the bound $2e^{-m\epsilon^2/2}$ is smaller than $1/2$ or so, because in the actual statement of the theorem we define $\epsilon$ to be a certain function of $m$, $\delta$, etc. Alternatively, think of us putting a condition on $m$ that it has to be at least of order $1/\epsilon^2$.
*we still need to show that, to fully complete the proof.
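Here's a quick numerical check of the starred claim, using scipy's binomial CDF (the $m$, $\epsilon$, and the grid of biases $p$ are arbitrary):

```python
from scipy.stats import binom

# Starred claim above: P[Bin(m, p)/m <= eps/2] only decreases as p grows,
# so bounding it at p = eps covers every true error p >= eps.
m, eps = 100, 0.2
for p in [eps, 0.3, 0.5, 0.8]:
    print(p, binom.cdf(int(m * eps / 2), m, p))  # monotonically decreasing in p
```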
Call $\{\,\exists h \in \mathcal{H} : L_{S'}(h) - L_S(h) > \epsilon/2\,\}$, now seen as an event over a random shuffle of the combined $2m$-tuple $T = S + S'$, property $B''$.
(we imagine first sampling a Set of $2m$ instances, and then deciding its order to get the tuple $T = S + S'$ (using python tuple sum :P))
NOW. This is the crucial moment. We notice that property $B''$ only depends on the labelling that $h$ gives on the elements in $T$, so the max over $\mathcal{H}$ is the same as the max over $\mathcal{H}_T$, the set of labellings that functions in $\mathcal{H}$ produce on $T$. This is now a finite set! Hurray! We can safely union bound the hecc out of this guy.
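To see how drastic the reduction from $\mathcal{H}$ to $\mathcal{H}_T$ is, a small sketch with threshold functions on the line (my toy class; a dense grid of thresholds stands in for the infinite class):

```python
import numpy as np

# H_T: the distinct labellings that hypotheses induce on the points of T.
# Toy class (mine): thresholds h_t(x) = 1[x > t]; infinitely many t's,
# approximated here by a dense grid.
T = [0.1, 0.4, 0.7, 0.9]                        # a tiny sample, 2m = 4
thresholds = np.linspace(0, 1, 1001)
H_T = {tuple(int(x > t) for x in T) for t in thresholds}
print(len(H_T), sorted(H_T))  # only 2m + 1 = 5 labellings, not 2^(2m) = 16
```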
$$P[B'' \mid T] \;\le\; \sum_{h \in \mathcal{H}_T} P\left[\,L_{S'}(h) - L_S(h) > \epsilon/2 \mid T\,\right] \;\le\; |\mathcal{H}_T| \max_{h \in \mathcal{H}_T} P\left[\,L_{S'}(h) - L_S(h) > \epsilon/2 \mid T\,\right]$$

using the Union bound (which, when expressing the union as a max of indicators, is the same as Sum is greater than max for nonneg numbers)
Now, $P\left[\,L_{S'}(h) - L_S(h) > \epsilon/2 \mid T\,\right]$ is the probability that a random shuffle of the $2m$ things in $T$, $k$ of which are misclassified by $h$ (where $0 \le k \le 2m$), gives us a shuffle such that $L_{S'}(h) - L_S(h) > \epsilon/2$.
We can look at the random variable $L_{S'}(h) - L_S(h)$ itself. It has mean $0$ and is a function of a series of binary variables which indicate whether a particular misclassified instance is in $S'$, say. These variables can be approximately modelled as being i.i.d. with prob $1/2$. However, they are not really: we are doing a random shuffle of them. However, if they were i.i.d., then the Bounded differences inequality would give us an exponential tail bound (the difference from flipping one of the variables is $2/m$).
Actually, you can see here that we can convert it into an i.i.d. thing, because they look at a different group: rather than the full permutation group, only at a subgroup of it generated by swaps between the $i$th and the $(m+i)$th element (so between corresponding elements in the two halves of $T$). This is still measure preserving, but one can write the relevant event as depending on a set of i.i.d. swap-related variables... Neato. Trick apparently comes from here?
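Here's a simulation of the swap subgroup (the error pattern and sizes are made up): each aligned pair $(i, m+i)$ gets swapped by an independent fair coin, which is the measure-preserving move above, and it makes $L_{S'}(h) - L_S(h)$ a sum of $m$ independent terms.

```python
import numpy as np

rng = np.random.default_rng(2)

# Swap trick: flip an i.i.d. fair coin sigma_i per aligned pair (i, m+i) of T
# and swap that pair iff sigma_i = 1. The statistic L_{S'}(h) - L_S(h)
# then becomes a sum of m independent terms +/- (e_{m+i} - e_i)/m.
m = 50
e = rng.binomial(1, 0.3, size=2 * m)   # hypothetical error pattern of h on T

def gap_after_swaps(e, sigma):
    first, second = e[:m].copy(), e[m:].copy()
    first[sigma], second[sigma] = second[sigma], first[sigma]
    return second.mean() - first.mean()

sigmas = rng.binomial(1, 0.5, size=(20_000, m)).astype(bool)
gaps = np.array([gap_after_swaps(e, s) for s in sigmas])
print(gaps.mean(), gaps.std())  # mean ~0, fluctuations of order 1/sqrt(m)
```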
Then (under this i.i.d. modelling) the diff between the value and its mean ($0$) is, with prob at least $1 - \delta$, smaller than $\sqrt{2\ln(1/\delta)/m}$, so that if $\epsilon/2 \ge \sqrt{2\ln(1/\delta)/m}$, i.e. $\delta \ge e^{-m\epsilon^2/8}$, the bad event for a single $h$ has probability at most $e^{-m\epsilon^2/8}$.
So that under this approximation, $P[B'' \mid T] \le |\mathcal{H}_T|\, e^{-m\epsilon^2/8}$.
So

$$P[B] \;\le\; 2\,P[B'] \;\le\; 2\,\tau_{\mathcal{H}}(2m)\, e^{-m\epsilon^2/8}$$
where $\tau_{\mathcal{H}}$ is the Growth function, a bound on $|\mathcal{H}_T|$ over all samples $T$ of size $2m$.
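Plugging in numbers (the VC dimension and sample sizes are made up, and I'm using Sauer's lemma $\tau_{\mathcal{H}}(2m) \le (2em/d)^d$ to bound the growth function):

```python
import numpy as np

# Final bound P[B] <= 2 * tau_H(2m) * exp(-m eps^2 / 8), with the growth
# function bounded via Sauer's lemma: tau_H(2m) <= (2 e m / d)^d for VC dim d.
def vc_bound(m, eps, d):
    log_tau = d * np.log(2 * np.e * m / d)   # log of Sauer's bound
    return 2 * np.exp(log_tau - m * eps**2 / 8)

# Illustrative numbers (mine): the bound is vacuous for small m, then
# collapses exponentially once m * eps^2 dominates d * log(m).
for m in [10_000, 100_000, 1_000_000]:
    print(m, vc_bound(m, eps=0.1, d=10))
```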
In here they actually prove a tighter version of the Growth function generalization error bound, which allows them to bound a sort of relative version of the generalization gap (divided by the square root of the true error, or by the actual true error, if they assume a bound on it too).
The bound they get is not uniform over training error; it's more like a generalization of the realizable bound: it shows that, with high probability, if the training error is less than blah (this is fixed before drawing the sample), then the generalization error is less than blah (just like this other blah).
That is why they later do the weighted union bound in here: because they don't really have an agnostic bound. They have a bound that can be applied to any fixed training error level, but doesn't hold uniformly over all training errors. It's a different kind of thing.
Actually, people seem to prove this using Rademacher complexity and Massart's lemma.