aka statistical estimation
An Estimator (whose computed value is called a point estimate) for a Parameter is a Random variable computed from the sampled data (i.e. a Statistic), chosen so that it is "close to" the real parameter (in expectation, with high probability, etc.); i.e. it helps estimate the value of the unknown parameter.
An unbiased estimator is an estimator whose Expected value is equal to the real parameter.
https://www.wikiwand.com/en/Unbiased_estimation_of_standard_deviation
Mean: sample mean
Variance: sample variance. see here
Often, a good estimator is unbiased and has the smallest possible uncertainty (variance). See Minimum variance unbiased estimator
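A concrete illustration of bias by simulation (a minimal sketch, assuming numpy and synthetic Normal data; the parameter values are made up): the sample mean is unbiased for the real mean, the sample variance that divides by n underestimates the real variance, and the n-1 version corrects this.

```python
import numpy as np

# Sketch: estimate the bias of the sample mean and of two variance estimators
# by averaging over many simulated datasets drawn from Normal(mu=3, sigma=2).
rng = np.random.default_rng(0)
mu, sigma, n, trials = 3.0, 2.0, 10, 200_000

samples = rng.normal(mu, sigma, size=(trials, n))

mean_hat = samples.mean(axis=1)              # sample mean
var_biased = samples.var(axis=1, ddof=0)     # divides by n   -> biased low
var_unbiased = samples.var(axis=1, ddof=1)   # divides by n-1 -> unbiased

print("average of sample means   :", mean_hat.mean(), "  (true mean:", mu, ")")
print("average of var with ddof=0:", var_biased.mean(), " (true variance:", sigma**2, ")")
print("average of var with ddof=1:", var_unbiased.mean(), "(true variance:", sigma**2, ")")
```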
Use Central limit theorem. The sample mean will follow approximately a Normal distribution with mean equal to the real mean and variance equal to the real Standard deviation squared divided by $n$, i.e. $\sigma^2/n$, where $n$ is the number of samples. We can use the estimates of these quantities to find Confidence intervals for the real mean.
The real mean: $\mu \approx \bar{X} - \frac{\sigma}{\sqrt{n}}\,Z$, where $\bar{X}$ is the sample mean. $Z$ is a Random variable which is distributed according to the Standard normal distribution (mean $0$ and variance $1$).
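Rearranging gives the usual approximate 95% confidence interval for the real mean (a standard worked form; 1.96 is the 0.975 quantile of the Standard normal distribution, and in practice $\sigma$ is replaced by the sample Standard deviation):

```latex
\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \;\approx\; Z \sim \mathcal{N}(0,1)
\quad\Longrightarrow\quad
P\!\left(\bar{X} - 1.96\,\frac{\sigma}{\sqrt{n}} \;\le\; \mu \;\le\; \bar{X} + 1.96\,\frac{\sigma}{\sqrt{n}}\right) \approx 0.95
```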
In this (frequentist) formulation, a 95% confidence interval means that if we repeatedly draw samples of this type, 95% of the time the resulting confidence interval will include the real mean.
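A coverage simulation makes this frequentist reading concrete (a sketch, assuming numpy, a synthetic Normal population with a known true mean, and the normal-approximation interval from above):

```python
import numpy as np

# Sketch: repeatedly draw samples, build a 95% CLT-based interval each time,
# and count how often the interval contains the (here, known) true mean.
rng = np.random.default_rng(1)
true_mean, true_sd, n, trials = 5.0, 3.0, 50, 10_000

covered = 0
for _ in range(trials):
    x = rng.normal(true_mean, true_sd, size=n)
    center = x.mean()
    half_width = 1.96 * x.std(ddof=1) / np.sqrt(n)   # plug in the estimated sd
    if center - half_width <= true_mean <= center + half_width:
        covered += 1

print("empirical coverage:", covered / trials)   # should be close to 0.95
```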
Uncertain knowledge + knowledge about the uncertainty = useful knowledge
Box model
Good measure of how good an estimator is
MSE $= E\big[(\hat{\theta} - \theta)^2\big] = \operatorname{Var}(\hat{\theta}) + \operatorname{Bias}(\hat{\theta})^2$
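The decomposition follows by adding and subtracting $E[\hat{\theta}]$ inside the square; the cross term vanishes in expectation:

```latex
\operatorname{MSE}(\hat\theta)
  = E\big[(\hat\theta - \theta)^2\big]
  = E\big[(\hat\theta - E[\hat\theta] + E[\hat\theta] - \theta)^2\big]
  = \underbrace{E\big[(\hat\theta - E[\hat\theta])^2\big]}_{\operatorname{Var}(\hat\theta)}
  + \underbrace{\big(E[\hat\theta] - \theta\big)^2}_{\operatorname{Bias}(\hat\theta)^2}
```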
Confidence intervals can be used to give High-probability guarantees on the estimator (e.g. bounding how far the true value is from the estimate with high probability, say with 95% confidence), but they may also be constructed on their own, without being tied to a point estimator.
Consistency. An estimator is consistent if it tends to the real value as the sample size goes to infinity (see the simulation sketch after this list).
CLT. Asymptotically normally distributed
Asymptotic efficiency. Its variance asymptotically attains the Cramér–Rao lower bound, i.e. it is asymptotically the Minimum variance unbiased estimator.
Ancillarity. An ancillary statistic is a measure of a sample whose distribution does not depend on the parameters of the model. (see wiki)
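A minimal sketch of consistency (assuming numpy and synthetic Exponential data, chosen arbitrarily): the sample mean gets closer to the true mean as the sample size grows.

```python
import numpy as np

# Sketch: watch the sample mean converge to the true mean as n grows
# (consistency), using Exponential data with scale 0.5, so the true mean is 0.5.
rng = np.random.default_rng(2)
true_mean = 0.5

for n in [10, 100, 1_000, 10_000, 100_000]:
    x = rng.exponential(scale=true_mean, size=n)
    print(f"n={n:>7}  sample mean={x.mean():.4f}  error={abs(x.mean() - true_mean):.4f}")
```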
Good formalism
Same as MAP, when using a Uniform prior (see the derivation after this list).
Variance can be computed asymptotically. The asymptotic Covariance matrix is given by the inverse of the Fisher information matrix.
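Why maximum likelihood coincides with MAP under a Uniform prior: with a constant prior $p(\theta)$, the posterior is proportional to the likelihood, so maximizing one maximizes the other.

```latex
\hat\theta_{\text{MAP}}
  = \arg\max_\theta \; p(\theta \mid x)
  = \arg\max_\theta \; \frac{p(x \mid \theta)\, p(\theta)}{p(x)}
  \;\overset{p(\theta)\ \text{const.}}{=}\;
  \arg\max_\theta \; p(x \mid \theta)
  = \hat\theta_{\text{MLE}}
```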
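A sketch of the asymptotic-variance claim on a made-up example (assuming numpy and Bernoulli data): the MLE of a Bernoulli parameter $p$ is the sample mean, the Fisher information per observation is $I(p) = \frac{1}{p(1-p)}$, so the asymptotic variance of the MLE is $\frac{1}{n\,I(p)} = \frac{p(1-p)}{n}$, which the simulation reproduces.

```python
import numpy as np

# Sketch: compare the empirical variance of the Bernoulli MLE (the sample mean)
# across many simulated datasets with the inverse-Fisher-information prediction.
rng = np.random.default_rng(3)
p_true, n, trials = 0.3, 200, 50_000

data = rng.binomial(1, p_true, size=(trials, n))
p_hat = data.mean(axis=1)                    # MLE for each simulated dataset

empirical_var = p_hat.var()
asymptotic_var = p_true * (1 - p_true) / n   # inverse Fisher information / n

print("empirical variance of MLE :", empirical_var)
print("asymptotic (Fisher) value :", asymptotic_var)
```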