Statistical estimation

cosmos 3rd April 2019 at 1:38am

An Estimator (whose realized value is a point estimate) for a Parameter is a Random variable that can be calculated from the sampled data (i.e. a Statistic), chosen so that it is "close to" the real parameter (in expectation, or with high probability, etc.); that is, it helps estimate the value of the unknown parameter.
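For example, the sample mean $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ is an estimator of the population mean $\mu$.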

Unbiased estimator

An unbiased estimator is an estimator whose Expected value is equal to the real parameter.
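Formally: $E[\hat{\theta}] = \theta$ for every value of the real parameter $\theta$.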

https://www.wikiwand.com/en/Unbiased_estimation_of_standard_deviation

Mean: sample mean

Variance: $\frac{n}{n-1} \times$ sample variance (Bessel's correction). see here
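A minimal numpy sketch of this correction (the data and seed here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=30)

# Biased (divide-by-n) variance vs the unbiased sample variance,
# which divides by n - 1 (Bessel's correction)
biased = data.var(ddof=0)
unbiased = data.var(ddof=1)

n = len(data)
assert np.isclose(unbiased, n / (n - 1) * biased)
```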

Uncertainty in estimator

Often, a good estimator is unbiased and has the smallest variance achievable. See Minimum variance unbiased estimator

Uncertainty of mean estimator

Use the Central limit theorem. The sample mean approximately follows a Normal distribution with mean equal to the real mean, and standard deviation equal to the real Standard deviation divided by $\sqrt{N}$, where $N$ is the number of samples. We can plug in estimates of these quantities to find Confidence intervals for the real mean.

The real mean: $\mu = \bar{X} + Z \times \sigma_{\bar{X}}$, where $\bar{X}$ is the sample mean and $Z$ is a Random variable distributed according to the Standard normal distribution (mean $0$ and variance $1$).

In this (frequentist) formulation, a 95% confidence interval means that if we repeatedly draw samples of this type, 95% of the time the resulting confidence interval will include the real mean.
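A minimal sketch of this computation with numpy and scipy (the generated data and the 95% level are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=10.0, scale=3.0, size=100)

x_bar = sample.mean()
# Estimated standard deviation of the sample mean: s / sqrt(N)
se = sample.std(ddof=1) / np.sqrt(len(sample))

z = stats.norm.ppf(0.975)  # ~1.96 for a 95% interval
lower, upper = x_bar - z * se, x_bar + z * se
print(f"95% CI for the mean: ({lower:.2f}, {upper:.2f})")
```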

Uncertain knowledge + knowledge about the uncertainty = useful knowledge

Box model

Mean squared error

A standard measure of how good an estimator is

MSE $= E[(\hat{\mu}-\mu)^2] = E[(\hat{\mu}-E[\hat{\mu}])^2] + (E[\hat{\mu}]-\mu)^2 =$ Variance($\hat{\mu}$) + Bias$^2$
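The decomposition follows by adding and subtracting $E[\hat{\mu}]$ inside the square; the cross term vanishes because $E[\hat{\mu} - E[\hat{\mu}]] = 0$:

$E[(\hat{\mu}-\mu)^2] = E[(\hat{\mu}-E[\hat{\mu}] + E[\hat{\mu}]-\mu)^2] = E[(\hat{\mu}-E[\hat{\mu}])^2] + (E[\hat{\mu}]-\mu)^2$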

Confidence intervals

Can be used to give High-probability guarantees on the estimator (e.g. bounding how far the true value is from the estimate, with, say, 95% confidence), but they may also be constructed on their own, without being tied to a point estimator.

Asymptotic properties

Consistency. An estimator is consistent if it tends to the real value as the sample size goes to infinity.
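A quick numerical illustration of consistency using the sample mean (true mean and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
true_mean = 4.0

# The sample mean is consistent: it approaches the true mean
# as the sample size grows (law of large numbers).
for n in (10, 1_000, 100_000):
    sample = rng.normal(loc=true_mean, scale=1.0, size=n)
    print(n, sample.mean())
```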

Asymptotic normality (CLT). The estimator is asymptotically normally distributed.

Asymptotic efficiency. The estimator asymptotically attains the Cramer-Rao bound, i.e. it is asymptotically the Minimum variance unbiased estimator.

Ancillarity. An ancillary statistic is a function of the sample whose distribution does not depend on the parameters of the model. (see wiki)
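For example, if $X_i \sim N(\mu, 1)$, the sample range $\max_i X_i - \min_i X_i$ is ancillary for $\mu$: its distribution does not depend on $\mu$.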

Types of point estimators

  • Minimum variance unbiased estimator
  • Equivariant estimator. See page 412 (section 8.9) in Rohatgi & Saleh, An Introduction to Probability and Statistics.
    • minimum risk equivariant estimator

Bayesian inference for statistical estimation

Good formalism

Maximum a posteriori (MAP) estimator
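The MAP estimate maximizes the posterior; by Bayes' theorem, since the evidence $p(x)$ does not depend on $\theta$:

$\hat{\theta}_{\text{MAP}} = \arg\max_\theta p(\theta \mid x) = \arg\max_\theta p(x \mid \theta)\, p(\theta)$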

Maximum likelihood estimator (MLE)

Same as MAP when using a Uniform prior (a worked sketch follows the list below).

  • Consistent
  • Asymptotically normally distributed
  • Asymptotically efficient: asymptotically attains the Cramer-Rao bound
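A minimal sketch of MLE for the rate of an Exponential model, where the closed-form MLE is $1/\bar{x}$ (the data and optimizer bounds here are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
data = rng.exponential(scale=2.0, size=500)  # true rate = 1/scale = 0.5

# Negative log-likelihood of an Exponential(rate) model:
# log L(rate) = n * log(rate) - rate * sum(x)
def nll(rate):
    return -(len(data) * np.log(rate) - rate * data.sum())

res = minimize_scalar(nll, bounds=(1e-6, 10.0), method="bounded")
print(res.x, 1.0 / data.mean())  # numerical MLE vs closed-form 1/mean(x)
```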

Uncertainty of maximum likelihood estimator

Variance can be computed asymptotically. The asymptotic Covariance matrix is given by the inverse of the Fisher information matrix.
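Concretely, for i.i.d. data, $\sqrt{n}\,(\hat{\theta}_{\text{MLE}} - \theta) \xrightarrow{d} N(0, I(\theta)^{-1})$, where $I(\theta) = -E\left[\frac{\partial^2}{\partial \theta^2} \log p(x \mid \theta)\right]$ is the Fisher information, so $\text{Cov}(\hat{\theta}_{\text{MLE}}) \approx \frac{1}{n} I(\theta)^{-1}$.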

Cramer-Rao bound
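For any unbiased estimator $\hat{\theta}$ based on $n$ i.i.d. samples, $\text{Var}(\hat{\theta}) \geq \frac{1}{n\, I(\theta)}$, where $I(\theta)$ is the Fisher information. The MLE attains this bound asymptotically.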

Credible intervals

Point estimation