Statistical independence


Given a Probability space, two events $A, B \in \mathbb{F}$ are called independent if $P(A \cap B) = P(A)P(B)$.
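As a quick numerical illustration (a minimal sketch; the two-dice setup and the particular events are arbitrary choices for this example), one can verify the product rule by enumerating a small sample space in Python:

```python
from itertools import product

# Sample space: two fair six-sided dice, all 36 outcomes equally likely.
omega = list(product(range(1, 7), repeat=2))

A = {(d1, d2) for d1, d2 in omega if d1 % 2 == 0}  # first die is even
B = {(d1, d2) for d1, d2 in omega if d2 > 4}       # second die shows 5 or 6

def P(event):
    return len(event) / len(omega)

print(P(A & B))     # 1/6
print(P(A) * P(B))  # 1/2 * 1/3 = 1/6, so A and B are independent
```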

Two random variables $X$ and $Y$ are called independent if for all (measurable) $A \subset \mathcal{X}$, $B \subset \mathcal{Y}$, the events $\{X \in A\}$ and $\{Y \in B\}$ are independent in the above sense.

Equivalently, $E[f(X)g(Y)] = E[f(X)]\,E[g(Y)]$ for all (bounded, measurable) functions $f, g$.
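A minimal Monte Carlo sketch of this characterization (the distributions and test functions below are arbitrary choices for illustration):

```python
import random

random.seed(0)
n = 1_000_000

# Independent draws: X uniform on [0, 1], Y standard normal.
xs = [random.random() for _ in range(n)]
ys = [random.gauss(0, 1) for _ in range(n)]

f = lambda x: x * x  # arbitrary test functions
g = lambda y: abs(y)

def mean(vals):
    return sum(vals) / len(vals)

lhs = mean([f(x) * g(y) for x, y in zip(xs, ys)])
rhs = mean([f(x) for x in xs]) * mean([g(y) for y in ys])
print(lhs, rhs)  # both approximately 1/3 * sqrt(2/pi) ~ 0.266
```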

See here: Chapter 2 Information Measures, Section 2.1 Independence and Markov Chains; see also independence in graphical models.

Independence of two random variables

Mutual independence

Pairwise independence

Note that pairwise independence is strictly weaker than mutual independence; see the sketch below for a classic counterexample.
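The classic counterexample: $X, Y$ are fair coin flips and $Z = X \oplus Y$. Every pair is independent, yet the triple is not mutually independent. A small enumeration check in Python:

```python
from itertools import product

# X, Y fair coin flips; Z = X XOR Y.
omega = list(product([0, 1], repeat=2))  # four equally likely outcomes (x, y)

def P(event):
    return sum(1 for w in omega if event(w)) / len(omega)

X = lambda w: w[0]
Y = lambda w: w[1]
Z = lambda w: w[0] ^ w[1]

# Pairwise: P(X=1, Z=1) equals P(X=1) * P(Z=1) (similarly for the other pairs).
print(P(lambda w: X(w) == 1 and Z(w) == 1))                # 0.25
print(P(lambda w: X(w) == 1) * P(lambda w: Z(w) == 1))     # 0.25

# Not mutual: P(X=1, Y=1, Z=1) = 0, but the product of marginals is 1/8.
print(P(lambda w: X(w) == 1 and Y(w) == 1 and Z(w) == 1))  # 0.0
print(P(lambda w: X(w) == 1) * P(lambda w: Y(w) == 1)
      * P(lambda w: Z(w) == 1))                            # 0.125
```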

Conditional independence

https://en.wikipedia.org/wiki/Conditional_independence

See here. Note that his definition is the same as the one on Wikipedia; just divide by $p(y)$ to see this. His example at the end is rather illustrative too.
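If the linked definition takes the form $p(x, y, z) = \frac{p(x, y)\,p(y, z)}{p(y)}$ (an assumption here; this is the form used in Yeung's Chapter 2 referenced above), the claimed equivalence is a one-liner. Dividing both sides by $p(y)$:

$\frac{p(x, y, z)}{p(y)} = \frac{p(x, y)}{p(y)} \cdot \frac{p(y, z)}{p(y)}$, i.e. $p(x, z \mid y) = p(x \mid y)\, p(z \mid y)$,

which is the Wikipedia definition of conditional independence (for $p(y) > 0$).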