Argue that the proposed estimators λ̂ and λ̃ below are both consistent and asymptotically normal. Then give their asymptotic variances V(λ̂) and V(λ̃), and decide whether one of them is always bigger than the other.

Let X₁, …, Xₙ ∼ i.i.d. Poiss(λ), for some λ > 0. Let λ̂ = X̄ₙ and λ̃ = −ln(Ȳₙ), where Yᵢ = 1{Xᵢ = 0}, i = 1, …, n.

𝑉(πœ†Λ†) =? and 𝑉(πœ†Λœ) =?

To argue that the proposed estimators λ̂ and λ̃ are both consistent and asymptotically normal, we show that each satisfies the two properties in turn.

1. Consistency:
An estimator is consistent if it converges in probability to the true parameter value as the sample size increases. Here, we want to show that both λ̂ and λ̃ converge to λ as n approaches infinity.

a. πœ†Μ‚ = π‘‹βŽ―βŽ―βŽ―βŽ―βŽ―π‘›:
The sample mean π‘‹βŽ―βŽ―βŽ―βŽ―βŽ―π‘› is a consistent estimator for the population mean due to the Law of Large Numbers. For an i.i.d. sequence of random variables 𝑋1, 𝑋2, ..., 𝑋𝑛 with a common distribution π–―π—ˆπ—‚π—Œπ—Œ(πœ†), the sample mean π‘‹βŽ―βŽ―βŽ―βŽ―βŽ―π‘› converges to πœ† as n approaches infinity. Therefore, πœ†Μ‚ = π‘‹βŽ―βŽ―βŽ―βŽ―βŽ―π‘› is a consistent estimator of πœ†.

b. πœ†Μƒ = βˆ’ln(π‘ŒβŽ―βŽ―βŽ―βŽ―π‘›):
In this case, π‘Œπ‘– = 1{𝑋𝑖 = 0} is an indicator variable that takes the value 1 if 𝑋𝑖 = 0, and 0 otherwise. So, π‘ŒβŽ―βŽ―βŽ―βŽ―π‘› represents the proportion of zeros in the sample. As n approaches infinity, the proportion of zeros in the sample will converge to the probability of getting a zero from the distribution π–―π—ˆπ—‚π—Œπ—Œ(πœ†). Taking the negative logarithm of this proportion, βˆ’ln(π‘ŒβŽ―βŽ―βŽ―βŽ―π‘›), will tend to πœ†. Thus, πœ†Μƒ = βˆ’ln(π‘ŒβŽ―βŽ―βŽ―βŽ―π‘›) is a consistent estimator of πœ†.

2. Asymptotic Normality:
An estimator θ̂ₙ of λ is asymptotically normal if √n(θ̂ₙ − λ) converges in distribution to N(0, V) for some finite asymptotic variance V. We now compute the asymptotic variances V(λ̂) and V(λ̃).

a. 𝑉(πœ†Μ‚):
For the sample mean estimator πœ†Μ‚ = π‘‹βŽ―βŽ―βŽ―βŽ―βŽ―π‘›, the variance is given by Var(π‘‹βŽ―βŽ―βŽ―βŽ―βŽ―π‘›) = Var(𝑋𝑖)/𝑛, where Var(𝑋𝑖) = πœ†(1 - πœ†) for 𝑖.𝑖.𝑑. random variables from the π–―π—ˆπ—‚π—Œπ—Œ distribution. Thus, 𝑉(πœ†Μ‚) = πœ†(1 - πœ†)/𝑛.
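As a sanity check on V(λ̂) = λ, one can simulate the sampling distribution of √n(λ̂ − λ) and compare its empirical variance to λ. A minimal sketch, assuming NumPy; λ, n, and the number of replications are illustrative assumptions:

```python
import numpy as np

# Monte Carlo sketch: the empirical variance of √n(λ̂ − λ) across many
# replications should be close to λ (here λ = 1.5, chosen arbitrarily).
rng = np.random.default_rng(2)
lam, n, reps = 1.5, 2_000, 5_000

x = rng.poisson(lam, size=(reps, n))
lam_hat = x.mean(axis=1)                        # one λ̂ per replication
emp_var = np.var(np.sqrt(n) * (lam_hat - lam))  # empirical asymptotic variance

print(emp_var)  # close to λ = 1.5
```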

b. 𝑉(πœ†Μƒ):
To find the variance of πœ†Μƒ = βˆ’ln(π‘ŒβŽ―βŽ―βŽ―βŽ―π‘›), we need to calculate the variance of π‘ŒβŽ―βŽ―βŽ―βŽ―π‘›. Since π‘ŒβŽ―βŽ―βŽ―βŽ―π‘› represents the proportion of zeros in the sample, it follows a binomial distribution with parameters 𝑛 and πœ†. The variance of a binomial distribution with parameters 𝑛 and πœ† is given by Var(π‘ŒβŽ―βŽ―βŽ―βŽ―π‘›) = π‘›πœ†(1 - πœ†). Taking the negative logarithm does not change the variance, so 𝑉(πœ†Μƒ) = π‘›πœ†(1 - πœ†).
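The Delta Method gives asymptotic variance e^λ − 1 for √n(λ̃ − λ), and this can be cross-checked by simulation. A minimal sketch, assuming NumPy; λ, n, and the replication count are arbitrary illustrative values:

```python
import numpy as np

# Monte Carlo sketch: the empirical variance of √n(λ̃ − λ) across many
# replications should be close to e^λ − 1 (delta-method prediction).
rng = np.random.default_rng(1)
lam, n, reps = 1.0, 5_000, 4_000

x = rng.poisson(lam, size=(reps, n))
lam_tilde = -np.log((x == 0).mean(axis=1))        # one λ̃ per replication
emp_var = np.var(np.sqrt(n) * (lam_tilde - lam))  # empirical asymptotic variance

print(emp_var, np.exp(lam) - 1)  # empirical vs. theoretical e^λ − 1
```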

Comparing the two asymptotic variances, V(λ̂) = λ while V(λ̃) = e^λ − 1. Their ratio is V(λ̃)/V(λ̂) = (e^λ − 1)/λ, which is strictly greater than 1 for every λ > 0, because e^λ > 1 + λ whenever λ > 0. Thus, V(λ̃) is always bigger than V(λ̂), and the gap grows rapidly (exponentially) as λ increases.
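The inequality (e^λ − 1)/λ > 1 for λ > 0 can be verified numerically over a grid. A small sketch, assuming NumPy; the grid endpoints are arbitrary:

```python
import numpy as np

# Numeric check that e^λ − 1 > λ for λ > 0, i.e. V(λ̃)/V(λ̂) > 1.
# np.expm1 computes e^λ − 1 accurately even for small λ.
lams = np.linspace(0.01, 10, 1000)
ratio = np.expm1(lams) / lams  # = (e^λ − 1)/λ

print(ratio.min() > 1.0)  # True: the ratio exceeds 1 on the whole grid
```

The minimum of the ratio occurs as λ → 0, where it approaches 1 from above, consistent with e^λ = 1 + λ + λ²/2 + ….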

In conclusion, both estimators λ̂ and λ̃ are consistent and asymptotically normal, with V(λ̂) = λ and V(λ̃) = e^λ − 1. The asymptotic variance V(λ̃) is always larger than V(λ̂) for λ > 0, so λ̂ is the more efficient of the two estimators.