3 a. As on the previous page, let X1,…,Xn be i.i.d. with pdf

f_θ(x) = θ x^(θ-1) · 1(0 < x < 1)
Assume we do not actually get to observe X1,…,Xn. Instead let Y1,…,Yn be our observations where Yi=1(Xi≤0.5). Our goal is to estimate θ based on this new data.
What distribution does Yi follow?
First, choose the type of the distribution:
• Bernoulli
• Poisson
• Normal
• Exponential
Second, enter the parameter of this distribution in terms of θ. Denote this parameter by mθ. (If the distribution is normal, enter only 1 parameter, the mean).
mθ=

b. Write down a statistical model associated to this experiment. Is the parameter θ identifiable?
Yes
No

c. Compute the Fisher information I(θ).
(To answer this question correctly, your answer to part (a) needs to be correct.)
I(θ)=

d. Compute the maximum likelihood estimator ˆθ for θ in terms of ¯Yn.
(Enter barY_n for ¯Yn.)
ˆθ=

e. Compute the method of moments estimator ˜θ for θ.
(Enter barY_n for ¯Yn.)
˜θ=

f. What is the asymptotic variance V(˜θ) of the method of moments estimator ˜θ?
V(˜θ)=

g. Give a formula for the p-value for the test of
H0: θ ≤ 1  vs.  H1: θ > 1
based on the asymptotic distribution of ˆθ.
To avoid double jeopardy, you may use V for the asymptotic variance V(θ0), I for the Fisher information I(θ0), hattheta for ˆθ, or enter your answer directly without using V or I or hattheta.
(Enter barY_n for ¯Yn, hattheta for ˆθ. If applicable, enter Phi(z) for the cdf Φ(z) of a normal variable Z, q(alpha) for the quantile qα for any numerical value α. Recall the convention in this course that P(Z≤qα)=1−α for Z∼N(0,1).)
p-value:

Assume n=50, and ¯Yn=0.46. Will you reject the null hypothesis at level α=5%?
Yes, reject the null hypothesis at level α=5%.
No, cannot reject the null hypothesis at level α=5%.

a. The distribution that Yi follows is a Bernoulli distribution.

b. The statistical model associated with this experiment is ({0,1}, {Bernoulli(mθ) : θ > 0}), with
Yi ~ Bernoulli(mθ), where mθ = P(Yi = 1) = P(Xi ≤ 0.5) = F_θ(0.5) = 0.5^θ = 2^(-θ), since the cdf of Xi is F_θ(x) = x^θ on (0, 1).

The parameter θ is identifiable: the map θ ↦ 2^(-θ) is strictly decreasing, hence injective, so different values of θ give different distributions for Yi.
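
As a quick sanity check on this value of mθ, the following short simulation (a sketch; the choice θ = 2 and the sample size 10^6 are arbitrary, and X is drawn by inverting its cdf F_θ(x) = x^θ) compares the empirical frequency of {Xi ≤ 0.5} with 2^(-θ):

    import numpy as np

    rng = np.random.default_rng(0)
    theta = 2.0                      # illustrative true parameter
    n_sim = 10**6

    # F_theta(x) = x^theta on (0, 1), so X = U^(1/theta) for U ~ Uniform(0, 1)
    x = rng.uniform(size=n_sim) ** (1.0 / theta)
    y = (x <= 0.5).astype(float)     # Y = 1(X <= 0.5)

    print(y.mean())                  # empirical P(Y = 1), close to 0.25
    print(0.5 ** theta)              # m_theta = 2^(-theta) = 0.25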

c. The Fisher information I(θ) of a single observation is the variance of the score:
I(θ) = Var(∂ ln f(Yi; θ)/∂θ)
The log-likelihood of one observation is
ln f(Yi; θ) = Yi ln(mθ) + (1 - Yi) ln(1 - mθ),   with mθ = 2^(-θ).
Differentiating with respect to θ by the chain rule (m'θ = -(ln 2) 2^(-θ)) gives the score
∂ ln f(Yi; θ)/∂θ = ((Yi/mθ) - ((1 - Yi)/(1 - mθ))) · m'θ,
and since Var(Yi/mθ - (1 - Yi)/(1 - mθ)) = 1/(mθ(1 - mθ)) for a Bernoulli(mθ) variable,
I(θ) = (m'θ)^2 / (mθ(1 - mθ)) = (ln 2)^2 · 2^(-θ) / (1 - 2^(-θ)) = (ln 2)^2 / (2^θ - 1).
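
This closed form can be checked by Monte Carlo, estimating the variance of the score at an arbitrary parameter value (a sketch; θ = 1.5 and the simulation size are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    theta = 1.5
    m = 0.5 ** theta                           # m_theta = 2^(-theta)
    dm = -np.log(2) * 0.5 ** theta             # derivative m'_theta

    y = rng.binomial(1, m, size=10**6).astype(float)
    # score of a single observation with respect to theta, via the chain rule
    score = (y / m - (1 - y) / (1 - m)) * dm

    print(score.var())                          # Monte Carlo estimate of I(theta)
    print(np.log(2) ** 2 / (2 ** theta - 1))    # closed form (ln 2)^2 / (2^theta - 1)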

d. The maximum likelihood estimator ˆθ for θ: the Bernoulli likelihood is maximized when mθ equals the observed proportion ¯Yn = sum(Yi)/n, i.e. 2^(-ˆθ) = ¯Yn, so
ˆθ = -ln(¯Yn)/ln(2) = -log2(¯Yn)

e. The method of moments estimator ˜θ for θ solves E(Yi) = 2^(-˜θ) = ¯Yn, so it coincides with the MLE:
˜θ = -ln(¯Yn)/ln(2)
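
A brief simulation illustrates that this estimator recovers θ (a sketch; the true value θ = 1.2 and the sample size are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(2)
    theta_true = 1.2
    n = 10**5

    x = rng.uniform(size=n) ** (1.0 / theta_true)   # X_i with pdf theta * x^(theta-1) on (0, 1)
    ybar = (x <= 0.5).mean()                        # observed sample mean of Y_i = 1(X_i <= 0.5)

    theta_hat = -np.log(ybar) / np.log(2)           # MLE = method of moments estimator
    print(theta_hat)                                # close to 1.2 for large n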

f. The asymptotic variance V(˜θ) of the method of moments estimator ˜θ, in the sense that √n(˜θ - θ) → N(0, V(˜θ)), is the reciprocal of the Fisher information (equivalently, the delta method applied to g(m) = -ln(m)/ln(2)):
V(˜θ) = 1/I(θ) = (1 - 2^(-θ))/((ln 2)^2 · 2^(-θ)) = (2^θ - 1)/(ln 2)^2

g. The p-value for the test of H0: θ ≤ 1 vs. H1: θ > 1 is based on the asymptotic normality of ˆθ, with the asymptotic variance evaluated at the boundary value θ0 = 1, where V(θ0) = (2^1 - 1)/(ln 2)^2 = 1/(ln 2)^2. The test statistic √n(ˆθ - 1)/√(V(θ0)) = √n ln(2)(ˆθ - 1) is approximately standard normal when θ = 1, so
p-value = 1 - Φ(√n ln(2)(ˆθ - 1)) = 1 - Φ(-√n ln(2 ¯Yn))

Given the values n = 50 and ¯Yn = 0.46, we get ˆθ = -ln(0.46)/ln(2) ≈ 1.12, a test statistic of about 0.59, and a p-value of about 0.28. Since 0.28 > 0.05, we cannot reject the null hypothesis at level α = 5%.
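
The arithmetic above can be reproduced in a few lines (a sketch using scipy's standard normal cdf):

    import numpy as np
    from scipy.stats import norm

    n, ybar = 50, 0.46
    theta_hat = -np.log(ybar) / np.log(2)              # ~ 1.12
    t_stat = np.sqrt(n) * np.log(2) * (theta_hat - 1)  # ~ 0.59
    p_value = 1 - norm.cdf(t_stat)                     # ~ 0.28

    print(theta_hat, t_stat, p_value)
    print("reject at 5%" if p_value < 0.05 else "cannot reject at 5%")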

a. The distribution that Yi follows is a Bernoulli distribution.

b. The statistical model associated with this experiment is:
Yi ~ Bernoulli(mθ), where mθ = P(Yi = 1) = P(Xi ≤ 0.5) = 0.5^θ = 2^(-θ). The parameter θ is identifiable, because θ ↦ 2^(-θ) is injective.

c. The Fisher information I(θ) of a single observation is given by:
I(θ) = (ln 2)^2 · 2^(-θ) / (1 - 2^(-θ)) = (ln 2)^2 / (2^θ - 1)

d. The maximum likelihood estimator ˆθ for θ in terms of ¯Yn (the sample mean of the Yi) is:
ˆθ = -ln(¯Yn)/ln(2).

e. The method of moments estimator ˜θ for θ is the same: ˜θ = -ln(¯Yn)/ln(2).

f. The asymptotic variance V(˜θ) of the method of moments estimator ˜θ is given by:
V(˜θ) = 1/I(θ) = (2^θ - 1)/(ln 2)^2.
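
This value can be verified by simulating many replications of the experiment and comparing n · Var(˜θ) with the closed form (a sketch; θ = 1, n = 500 and 20,000 replications are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(3)
    theta, n, reps = 1.0, 500, 20_000

    # reps independent samples of size n; compute the estimator on each
    u = rng.uniform(size=(reps, n))
    x = u ** (1.0 / theta)
    ybar = (x <= 0.5).mean(axis=1)
    theta_tilde = -np.log(ybar) / np.log(2)

    print(n * theta_tilde.var())                 # Monte Carlo estimate of V(theta_tilde)
    print((2 ** theta - 1) / np.log(2) ** 2)     # closed form, ~ 2.08 at theta = 1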

g. The p-value for the test of H0: θ ≤ 1 vs. H1: θ > 1, based on the asymptotic distribution of ˆθ with the variance evaluated at θ0 = 1 (so V(θ0) = 1/(ln 2)^2), can be calculated as:
p-value = 1 - Φ(√n (ˆθ - 1) / √(V(θ0))) = 1 - Φ(√n ln(2) (ˆθ - 1)),
where Φ(·) is the cumulative distribution function (CDF) of the standard normal distribution.

Given that n = 50 and ¯Yn = 0.46, we can substitute these values into the formulas:

c. I(θ) = (ln 2)^2 / (2^θ - 1)   (this does not depend on the data)
d. ˆθ = -ln(0.46)/ln(2) ≈ 1.12
e. ˜θ ≈ 1.12
f. V(˜θ) ≈ (2^1.12 - 1)/(ln 2)^2 ≈ 2.44
g. p-value = 1 - Φ(√50 · ln(2) · (1.12 - 1)) ≈ 1 - Φ(0.59) ≈ 0.28

To determine whether we reject the null hypothesis at level α = 5%, we compare the p-value to α. Since the p-value (≈ 0.28) is greater than α = 0.05, we cannot reject the null hypothesis at level α = 5%.

a. To determine the distribution that Yi follows, note what Yi represents. Yi is defined as 1(Xi ≤ 0.5): it takes the value 1 if Xi is less than or equal to 0.5 and 0 otherwise. An indicator variable taking only the values 0 and 1 is a Bernoulli random variable.

Therefore, the distribution that Yi follows is a Bernoulli distribution.

b. The statistical model associated with this experiment is defined as follows:
Yi ~ Bernoulli(mθ), where mθ is the parameter of the Bernoulli distribution. Here mθ represents the probability of success, i.e. the probability that Xi is less than or equal to 0.5, which equals mθ = F_θ(0.5) = 0.5^θ = 2^(-θ).

Yes, the parameter θ is identifiable, since the map θ ↦ 2^(-θ) is strictly decreasing and therefore injective: different values of θ produce different success probabilities, and hence different distributions for Yi.

c. The Fisher information I(θ) is a measure of the amount of information that a single observation provides about the parameter θ. It can be computed as minus the expected value of the second derivative of the log-likelihood.

The log-likelihood of one observation from a Bernoulli(mθ) distribution, with mθ = 2^(-θ), is:
log L(θ) = Yi · log(mθ) + (1 - Yi) · log(1 - mθ)

The Fisher information I(θ) is then:
I(θ) = -E[∂^2(log L(θ))/∂θ^2] = (ln 2)^2 · 2^(-θ) / (1 - 2^(-θ))
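
One way to check this expression is to differentiate the log-likelihood symbolically and take the expectation with E[Yi] = 2^(-θ) (a sketch that assumes sympy is available; the final simplification should reduce the difference to 0):

    import sympy as sp

    theta, y = sp.symbols('theta y', positive=True)
    m = 2 ** (-theta)                                   # m_theta = 2^(-theta)
    loglik = y * sp.log(m) + (1 - y) * sp.log(1 - m)    # log-likelihood of one observation

    d2 = sp.diff(loglik, theta, 2)
    # the second derivative is linear in y, so substituting y -> E[y] = m gives its expectation
    fisher = sp.simplify(-d2.subs(y, m))                # I(theta) = -E[d^2 log L / d theta^2]

    print(fisher)
    print(sp.simplify(fisher - sp.log(2)**2 * 2**(-theta) / (1 - 2**(-theta))))  # expect 0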

d. The maximum likelihood estimator ˆθ for θ is obtained by maximizing the log-likelihood, i.e. by solving:

∂(log L(θ))/∂θ = 0

For a Bernoulli sample the likelihood is maximized when mθ equals the sample proportion, so 2^(-ˆθ) = ¯Yn, which gives the maximum likelihood estimator in terms of ¯Yn:

ˆθ = -ln(¯Yn)/ln(2)

e. The method of moments estimator ˜θ for θ is obtained by equating the population moment with its sample counterpart:

E(Yi) = mθ = 2^(-˜θ) = ¯Yn

Solving for ˜θ gives ˜θ = -ln(¯Yn)/ln(2), the same as the maximum likelihood estimator.

f. The asymptotic variance V(˜θ) of the method of moments estimator ˜θ is the inverse of the Fisher information:

V(˜θ) = I(θ)^(-1) = (2^θ - 1)/(ln 2)^2

g. To calculate the p-value for the test of H0: θ ≤ 1 vs. H1: θ > 1 based on the asymptotic distribution of ˆθ, we standardize ˆθ - 1 using the asymptotic variance evaluated at the boundary of the null hypothesis, θ0 = 1, where V(θ0) = 1/(ln 2)^2.

The p-value can be calculated as:
p-value = 1 - Φ[√n (ˆθ - 1) / sqrt(V(θ0))] = 1 - Φ[√n ln(2) (ˆθ - 1)]

where Φ represents the cumulative distribution function (CDF) of the standard normal distribution.

Given that n = 50 and ¯Yn = 0.46, we have ˆθ = -ln(0.46)/ln(2) ≈ 1.12, a test statistic of √50 · ln(2) · 0.12 ≈ 0.59, and a p-value of 1 - Φ(0.59) ≈ 0.28. Since 0.28 > 0.05, we cannot reject the null hypothesis at the 5% significance level.