Borel-Cantelli lemmas

cosmos 10th April 2019 at 10:58am

https://www.wikiwand.com/en/Borel%E2%80%93Cantelli_lemma#/Converse_result

Proof of the 2nd Borel-Cantelli lemma (used for Recurrence): https://www.wikiwand.com/en/Borel%E2%80%93Cantelli_lemma#/Proof%5B5%5D. What it is saying is: "what is the probability that after some finite time $N$, I don't get any more of the events in $\cup_{i=N}^\infty E_i$?" (where each $E_i$ is the event {I come back to the origin at time $i$} in the case of studying recurrence). When the events at different times are independent, that probability can be expressed as a product of the probabilities of the complements, and by bounding it with an exponential, $\prod_{i=N}^\infty (1 - P(E_i)) \leq \exp\left(-\sum_{i=N}^\infty P(E_i)\right)$, you can see it's zero if the sum of the probabilities of the events over all time is infinite. One can then show that the probability of the union of the sets {sequences such that the event doesn't occur any more after finite time $N$}, over increasing $N$ (which are nested!), is the limit of their probabilities as $N$ goes to infinity. But we showed those to be $0$, therefore this limit is $0$. (Actually we showed them to be zero by using $\exp(-\infty) = 0$, which is really a limit thing, and so we are taking two limits here... Hmm. But the limits in the proof are taken in the right order so that it works out. This all amounts to how liminfs and limsups work in measure theory :P Also, the other limit (the one that isn't $N$) is like the time horizon, and it makes sense to take that limit first.) But it makes intuitive sense at least: if the probability that it won't happen again after time $N$ is $0$ for any $N$, then it will happen infinitely often with probability 1...
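Here's a minimal numerical sketch of that exponential bound, assuming (hypothetically) independent events with $P(E_i) = 1/i$, so that the sum of probabilities diverges (harmonic series):

```python
import numpy as np

# Hypothetical setup: independent events E_i with P(E_i) = 1/i, so sum_i P(E_i) diverges.
# P(no E_i occurs for N <= i <= M) = prod_{i=N}^{M} (1 - P(E_i)),
# and 1 - x <= exp(-x) gives the exponential bound used in the proof.
N = 10
for M in [100, 10_000, 1_000_000]:
    i = np.arange(N, M + 1)
    p = 1.0 / i
    product = np.prod(1.0 - p)   # probability of "no event between N and M"
    bound = np.exp(-np.sum(p))   # exponential upper bound on that product
    print(f"M={M:>9}: product={product:.3e}  bound={bound:.3e}")
# Both columns shrink to 0 as the horizon M grows, for any fixed N.
```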

If, however, we ask what the probability is that the event $E$ doesn't happen after time $N$ and before time $M$, we get a nonzero amount, and the limit of this amount depends on how $N$ and $M$ tend to infinity.

We are taking $M$ to infinity first so that we can apply the argument that the intersections of the $E_i^c$ are nested... Yeah yeah, I get that, but the above also makes sense.
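To see concretely how the answer depends on how $N$ and $M$ tend to infinity, here's a sketch with the same hypothetical $P(E_i) = 1/i$, where the window probability telescopes exactly to $(N-1)/M$:

```python
import numpy as np

# With p_i = 1/i the factors telescope: prod_{i=N}^{M} (1 - 1/i) = (N-1)/M,
# so the joint limit depends on how M grows relative to N.
def window_prob(N, M):
    i = np.arange(N, M + 1)
    return np.prod(1.0 - 1.0 / i)

for N in [10, 100, 1000]:
    print(N, window_prob(N, 2 * N), window_prob(N, N * N))
# M = 2N  -> tends to 1/2: the event can easily fail to occur in the whole window.
# M = N^2 -> tends to 0:   taking M much larger than N recovers the theorem's answer.
```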

When we say "happens infinitely often" we are making a statement about an event where the time horizon $M$ has already been taken to infinity. Therefore that limit is taken before any limit regarding the definition of the event!

Yes, for any finite $M$, "happens infinitely often" isn't even defined! So we write down the definition of the event, whose probability may involve some limit w.r.t. some variable (like $N$ in this case), but the probability of any basic event is computed having already taken the limit of $M$ to infinity! That is our probability space!

Ok, that makes sense. However, the probability of the infinite union becoming a limit of probabilities still requires some thought!

Something happens infinitely often if the probability of {that something not happening any more after a finite time $N$} is zero, no matter how large $N$*. That "no matter how large $N$" is what the limit on $N$ is capturing. But in fact, we have shown explicitly that it is true no matter how large $N$ is, so we have shown explicitly the definition I just made in the first sentence of this paragraph. Therefore, the formal axioms of probability theory, which the proof follows, give the same result as the formulation I just made here (which isn't made of complements of limsups, etc.).

* This can be justified as: if it happens only a finite number of times, then there is a finite $N$ after which it doesn't happen.

Note that we formally define an infinite sum or an infinite product as the limit of that sum or product as the number of terms or factors goes to infinity.
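As a small sketch of those limits of partial products (with hypothetical event probabilities $p_i$), compare a summable tail with a divergent one:

```python
import numpy as np

# Partial products prod_{i=2}^{M} (1 - p_i) as the number of factors M grows:
# p_i = 1/i^2 (summable)  -> positive limit (here it telescopes to 1/2),
# p_i = 1/i   (divergent) -> limit 0, which is what the 2nd lemma exploits.
for M in [10, 1000, 100_000]:
    i = np.arange(2, M + 1)
    print(M, np.prod(1 - 1.0 / i**2), np.prod(1 - 1.0 / i))
```

Whether that limit is zero or positive is exactly the dichotomy between the two lemmas (for independent events).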


Btw, the notion of limsup used in the proof can be read as {"event happens again after time 1" and "event happens again after time 2" and "event happens again after time 3" and ...}, which agrees with my definition above. The negation of this is that there is some time $N$ after which the event doesn't happen. The probability of each of these is exactly $0$, and therefore the probability of the infinite union over $N$ is $0$. In fact, because the events are nested, we know that the probability of the infinite union over them is just the limit of the probability of the one for $N$, as $N$ tends to infinity (i.e. the probability of the largest one, if the union were finite; because it is infinite, we consider the limit. In fact, in our case, all the elements in the sequence of unions have probability $0$...). Ok, so the definitions make sense, and they agree with intuition.
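A Monte Carlo sketch of those "happens again after time $N$" events, again with the hypothetical independent $P(E_i) = 1/i$, truncated at a finite horizon $T$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate P(some E_i occurs with i > N) for independent E_i with P(E_i) = 1/i,
# truncated at horizon T. The exact value here is 1 - N/T (telescoping product).
T, trials = 10_000, 500
for N in [10, 100, 1000]:
    i = np.arange(N + 1, T + 1)
    occurs = (rng.random((trials, len(i))) < 1.0 / i).any(axis=1)
    print(N, occurs.mean())
# The estimates sit near 1 - N/T, which tends to 1 as T -> infinity for each fixed N:
# every "happens again after N" event has probability 1 in the limit.
```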

In practice, however, where things are finite, what we need to consider is taking $M$ as far as possible before taking the limit of $N$. It makes sense, however, that we would have these difficulties, because this theorem deals with infinity at the core of its statement. Translating the limits implied by the formal definitions of the things in its statement, it means in practice taking $M$ as big as possible before considering $N$; in fact $N \ll M$ (← this corresponds to the practical approximation of taking the limit of $M$ before taking the limit of $N$*) would be the "practical/computable" version of this idealized theorem.

* This should be read as "$M$ arbitrarily larger than $N$", not the usual asymptotic meaning of the symbol $\ll$, which here isn't defined. This seems like a good general recipe for translating orders of limits into finitist statements. One could maybe prove this intuition in general too, and prove approximation results for limit-based things; I think it's likely to be quite generally true! Yeah, a single infinite limit corresponds to {thing is arbitrarily large}, an ordered sequence of limits corresponds to things like {thing 2 is arbitrarily larger than thing 1}, etc.! See Limits and infinity

To make a finitist version of this statement, one basically needs to say "the probability that something doesn't happen in a really long time", rather than "the probability that something doesn't happen infinitely often".
So: "the probability that something doesn't happen in a really long time is very small if the sum of the probabilities of that something (assuming independence) over that really long time is really large", and if, as that really long time is made longer, the sum of probabilities gets arbitrarily large, then the probability of it not happening gets arbitrarily small.