Argue that the proposed estimators λ̂ and λ̃ below are both consistent and asymptotically normal. Then give their asymptotic variances V(λ̂) and V(λ̃), and decide whether one of them is always bigger than the other.

Let X₁, …, Xₙ be i.i.d. Poiss(λ) for some λ > 0. Let λ̂ = X̄ₙ and λ̃ = −ln(Ȳₙ), where Yᵢ = 1{Xᵢ = 0}, i = 1, …, n.

V(λ̂) = ?

V(λ̃) = ?

To argue that the estimators λ̂ and λ̃ are both consistent and asymptotically normal, we need to show that each converges in probability to the true parameter λ as the sample size grows, and that, after √n-scaling, its sampling distribution approaches a normal distribution.

First, let's examine the estimator λ̂ = X̄ₙ, the sample mean. To show consistency, we need to demonstrate that λ̂ converges in probability to λ.

By the law of large numbers, X̄ₙ converges in probability to the population mean E(X₁), which for a Poisson(λ) variable equals λ. Hence, λ̂ is consistent.
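As a quick sanity check (a sketch using numpy; the seed and the value of λ are arbitrary choices, not part of the problem), simulating Poisson samples shows the sample mean settling near λ as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.5  # true parameter, chosen arbitrarily for illustration

# By the LLN, the sample mean should settle near lam as n grows.
for n in (100, 10_000, 1_000_000):
    x = rng.poisson(lam, size=n)
    print(n, x.mean())
```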

Next, let's consider the estimator λ̃ = −ln(Ȳₙ), where Ȳₙ is the sample mean of the indicator variables Yᵢ = 1{Xᵢ = 0}. To show consistency, we need to demonstrate that λ̃ converges in probability to λ.

Since Yᵢ equals 1 when Xᵢ = 0 and 0 otherwise, Ȳₙ is the proportion of zeros in the sample. By the law of large numbers, Ȳₙ converges in probability to the probability that the Poisson variable takes the value 0, which is P(X₁ = 0) = e^(-λ).

Since the map y ↦ −ln(y) is continuous at e^(-λ) > 0, the continuous mapping theorem gives λ̃ = −ln(Ȳₙ) → −ln(e^(-λ)) = λ in probability. (Note that λ̃ is not unbiased in finite samples, since the logarithm is nonlinear, but this does not affect consistency.)

Both estimators λ̂ and λ̃ are therefore consistent: each converges to λ in probability as the sample size increases.
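The same kind of check works for λ̃ (again a numpy sketch with illustrative values). One caveat worth noting: λ̃ is undefined on the event that the sample contains no zeros at all, but for fixed λ that event has probability (1 − e^(-λ))^n → 0:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n = 1.0, 1_000_000  # lam kept moderate so zeros are not too rare

x = rng.poisson(lam, size=n)
y_bar = (x == 0).mean()     # proportion of zeros, estimates e^(-lam)
lam_tilde = -np.log(y_bar)  # the plug-in estimator
print(y_bar, lam_tilde)
```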

To determine the asymptotic variances, recall that V(λ̂) denotes the variance of the limiting normal distribution of √n(λ̂ − λ), and similarly for V(λ̃).

For λ̂ = X̄ₙ: since each Xᵢ is independently and identically distributed as Poisson(λ), we have Var(Xᵢ) = λ, and the central limit theorem gives

√n(X̄ₙ − λ) → N(0, λ) in distribution.

Therefore, V(λ̂) = λ.

For λ̃ = −ln(Ȳₙ): each Yᵢ is a Bernoulli variable with success probability p = P(Xᵢ = 0) = e^(-λ), so Var(Yᵢ) = e^(-λ)(1 − e^(-λ)) and, by the central limit theorem,

√n(Ȳₙ − e^(-λ)) → N(0, e^(-λ)(1 − e^(-λ))) in distribution.

Applying the delta method with g(y) = −ln(y), whose derivative at p = e^(-λ) is g′(p) = −1/p = −e^λ, we obtain

√n(λ̃ − λ) → N(0, (g′(p))² · p(1 − p)) = N(0, e^(2λ) · e^(-λ)(1 − e^(-λ))).

Simplifying, V(λ̃) = e^λ(1 − e^(-λ)) = e^λ − 1. These two limits also establish that both estimators are asymptotically normal.
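A Monte Carlo check of both asymptotic variances (a numpy sketch; λ, n, and the replication count are arbitrary illustration choices). For each replication we compute both estimators, then compare n times their empirical variance against λ and e^λ − 1:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n, reps = 1.5, 1_000, 5_000  # arbitrary illustration choices

x = rng.poisson(lam, size=(reps, n))
lam_hat = x.mean(axis=1)                    # X-bar_n, one per replication
lam_tilde = -np.log((x == 0).mean(axis=1))  # -ln(Y-bar_n), one per replication

# n * Var(estimator) should approach the asymptotic variances.
print(n * lam_hat.var(), lam)                # target: lam = 1.5
print(n * lam_tilde.var(), np.exp(lam) - 1)  # target: e^lam - 1 ≈ 3.48
```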

To compare the asymptotic variances, note that

V(λ̃) − V(λ̂) = e^λ − 1 − λ > 0 for every λ > 0,

since e^λ = 1 + λ + λ²/2 + ⋯ > 1 + λ. Hence V(λ̃) is always strictly bigger than V(λ̂): λ̂ is asymptotically more efficient for every value of λ.

This is expected: λ̂ = X̄ₙ is the maximum likelihood estimator, and its asymptotic variance λ attains the Cramér–Rao bound 1/I(λ) = λ (the Fisher information of Poiss(λ) is I(λ) = 1/λ), whereas λ̃ discards everything about each observation except whether it equals zero. The gap e^λ − 1 − λ grows rapidly with λ, since zeros become rare and Ȳₙ then carries very little information.
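A minimal numeric check of the two asymptotic variances, V(λ̂) = λ and V(λ̃) = e^λ − 1, across a few values of λ:

```python
import math

# e^lam - 1 = lam + lam**2/2 + lam**3/6 + ... > lam for every lam > 0.
for lam in (0.1, 0.5, 1.0, 2.0, 5.0):
    v_hat, v_tilde = lam, math.exp(lam) - 1
    print(f"lam={lam}: V(hat)={v_hat:.4f}, V(tilde)={v_tilde:.4f}")
```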