We estimate the unknown mean θ of a random variable X (where X has a finite and positive variance) by forming the sample mean Mn = (X1 + ⋯ + Xn)/n of n i.i.d. samples Xi, and then constructing the estimator

Θ^ = Mn + 1/n.

Is this estimator unbiased?

Is this estimator consistent?

Consider now a different estimator, Θ^n = X1, which ignores all but the first measurement.

Is this estimator unbiased?

Is this estimator consistent?

To determine whether the estimator Θ^ = Mn + 1/n is unbiased, we need to calculate its expected value and see if it equals the true value of θ.

The expected value of Θ^ can be found as follows:
E[Θ^] = E[Mn + 1/n]

Since E is a linear operator, we can split this expectation into two parts:
E[Θ^] = E[Mn] + E[1/n]

The expected value of Mn follows from linearity, since each sample Xi has mean θ:
E[Mn] = (E[X1] + ⋯ + E[Xn])/n = nθ/n = θ

This is the familiar fact that the sample mean is an unbiased estimator of the population mean.

As for 1/n, it is a deterministic constant, so its expected value is the constant itself:
E[1/n] = 1/n

Combining these results, we have:
E[Θ^] = E[Mn] + E[1/n] = θ + 1/n

Since E[Θ^] = θ + 1/n ≠ θ, the estimator is not unbiased: it carries a bias of E[Θ^] − θ = 1/n.
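
As a quick numerical sanity check (a sketch, not part of the original problem), the Python snippet below averages Mn + 1/n over many independent experiments; the exponential distribution, the true mean θ = 2, the sample size n = 10, and the number of trials are all arbitrary illustrative choices.

```python
import numpy as np

# Illustrative assumptions (not from the problem): X ~ Exponential
# with true mean theta = 2.0, sample size n = 10.
theta = 2.0
n = 10
num_trials = 200_000

rng = np.random.default_rng(0)

# Each row is one experiment consisting of n i.i.d. samples.
samples = rng.exponential(scale=theta, size=(num_trials, n))
M_n = samples.mean(axis=1)        # sample mean of each experiment
theta_hat = M_n + 1.0 / n         # the estimator Theta_hat = Mn + 1/n

print("empirical  E[Theta_hat]:", theta_hat.mean())   # close to 2.1
print("theoretical theta + 1/n:", theta + 1.0 / n)    # 2.1
```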

To determine whether the estimator Θ^n = X1 is unbiased, we can follow the same process. The expected value of Θ^n is given by:

E[Θ^n] = E[X1] = θ

Therefore, the estimator Θ^n is unbiased as its expected value equals the true value of θ.
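
The same kind of empirical check works for Θ^n = X1; again the distribution and parameters below are arbitrary illustrative choices, not part of the problem statement.

```python
import numpy as np

# Same illustrative assumption: exponential samples with true mean theta = 2.0.
theta = 2.0
num_trials = 200_000

rng = np.random.default_rng(1)

# For Theta_hat_n = X1 only the first measurement of each experiment matters,
# so it is enough to draw one sample per trial.
x1 = rng.exponential(scale=theta, size=num_trials)

print("empirical E[X1]:", x1.mean())   # close to theta = 2.0
```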

Regarding consistency, we need to consider the behavior of the estimators as the sample size n increases.

For the estimator Θ^ = Mn + 1/n: by the weak law of large numbers (which applies because the variance of X is finite), Mn converges to θ in probability, and the deterministic term 1/n tends to 0 as n tends to infinity. Hence Θ^ converges to θ in probability, and the estimator is consistent.

On the other hand, the estimator Θ^n = X1 does not depend on the sample size: whether n is small or large, it uses only the first measurement. Since X1 has positive variance, its distribution does not concentrate around θ as n tends to infinity, so Θ^n does not converge in probability to θ and the estimator is not consistent.
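
To illustrate both consistency claims numerically (again only a sketch, under the same arbitrary distributional assumptions as the earlier snippets, with an arbitrary tolerance ε), the code below estimates P(|estimator − θ| > ε) for increasing n: the probability shrinks toward 0 for Mn + 1/n but stays roughly constant for X1.

```python
import numpy as np

# Illustrative assumptions as above: exponential samples with true mean theta = 2.0.
theta = 2.0
eps = 0.25             # tolerance used to measure "close to theta"
num_trials = 20_000
rng = np.random.default_rng(2)

for n in (1, 10, 100, 1000):
    samples = rng.exponential(scale=theta, size=(num_trials, n))
    mn_plus = samples.mean(axis=1) + 1.0 / n   # Theta_hat = Mn + 1/n
    x1 = samples[:, 0]                         # Theta_hat_n = X1

    p_mn = np.mean(np.abs(mn_plus - theta) > eps)
    p_x1 = np.mean(np.abs(x1 - theta) > eps)
    print(f"n={n:5d}  P(|Mn + 1/n - theta| > eps) = {p_mn:.3f}   "
          f"P(|X1 - theta| > eps) = {p_x1:.3f}")
```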