2

Let X1,…,Xn be i.i.d. random variables with pdf f_θ defined as follows:
f_θ(x) = θx^(θ−1) * 1(0 ≤ x ≤ 1)
where θ is some positive number.
(a)

Is the parameter θ identifiable?
Yes
No

(b)

Compute the maximum likelihood estimator ˆθ of θ.
(Enter Sigma_i(g(X_i)) for the sum ∑_{i=1}^n g(X_i), e.g. enter Sigma_i(X_i^2) for ∑_{i=1}^n X_i^2, enter Sigma_i(ln(X_i)) for ∑_{i=1}^n ln(X_i). Do not forget any necessary n in your answer, e.g. X̄_n will need to be entered as Sigma_i(X_i)/n. Do not worry about the parser not rendering correctly, as the grader will still work independently. If you would like proper rendering, enclose Sigma_i(g(X_i)) in parentheses, i.e. use (Sigma_i(g(X_i))).)
Maximum likelihood estimator ˆθ=

(c)
As in the previous exercise, let X1,…,Xn be i.i.d. with pdf f_θ as above, where θ > 0.
Compute the Fisher information.
I(θ)=

(d)
What kind of distribution does the distribution of √nˆθ approach as n grows large?
Bernoulli
Poisson
Normal
Exponential

(e)

What is the asymptotic variance V(ˆθ) of ˆθ ?
To avoid double jeopardy, you may use I for the Fisher information I(θ) evaluated at θ, or you may enter your answer without using I.
V(ˆθ) =

(f)

Using the MLE ˆθ, find the shortest confidence interval for θ with asymptotic level 85% using the plug-in method.
To avoid double jeopardy, you may use V for the appropriate estimator of the asymptotic variance V(ˆθ), and/or I for the Fisher information I(ˆθ) evaluated at ˆθ, or you may enter your answer without using V or I.
(Enter hattheta for ˆθ. If applicable, enter Phi(z) for the cdf Φ(z) of a normal variable Z, q(alpha) for the quantile q_α for any numerical value α. Recall the convention in this course that P(Z ≤ q_α) = 1 − α for Z ∼ N(0,1).)
I_plug-in = [A, B] where
A=
B=
3. As on the previous page, let X1,…,Xn be i.i.d. with pdf
f_θ(x) = θx^(θ−1) * 1(0 ≤ x ≤ 1).
Assume we do not actually get to observe X1,…,Xn. Instead let Y1,…,Yn be our observations where Yi=1(Xi≤0.5). Our goal is to estimate θ based on this new data.
What distribution does Yi follow?
First, choose the type of the distribution:
• Bernoulli
• Poisson
• Normal
• Exponential
Second, enter the parameter of this distribution in terms of θ. Denote this parameter by m_θ. (If the distribution is normal, enter only 1 parameter, the mean.)
m_θ =

(a) To determine whether the parameter θ is identifiable, we check whether different values of θ produce different probability density functions (pdfs). Here f_θ(x) = θx^(θ−1) * 1(0 ≤ x ≤ 1), where θ is some positive number. If θ1 ≠ θ2, then f_θ1(x)/f_θ2(x) = (θ1/θ2)·x^(θ1−θ2) is not constant equal to 1 on (0,1), so f_θ1 ≠ f_θ2. The map θ ↦ f_θ is therefore injective, meaning θ is identifiable. The answer is Yes.
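
As a quick illustration (a numerical sketch added here, not part of the original exercise), evaluating the pdf at a few points under two different values of θ shows that the two functions clearly differ:

    import numpy as np

    def f(theta, x):
        # pdf f_theta(x) = theta * x**(theta - 1) on (0, 1)
        return theta * x ** (theta - 1)

    x = np.linspace(0.1, 0.9, 5)
    print(f(0.5, x))  # pdf values under theta = 0.5
    print(f(2.0, x))  # pdf values under theta = 2.0 -- different, so the pdfs are not equal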

(b) To compute the maximum likelihood estimator (MLE) ˆθ of θ, we find the value of θ that maximizes the likelihood function L(θ) = ∏(i=1 to n) f_θ(X_i). Taking logarithms,

log L(θ) = ∑(i=1 to n) log(θ X_i^(θ−1)) = n log θ + (θ − 1) ∑(i=1 to n) ln(X_i).

Differentiating with respect to θ and setting the derivative to zero,

d/dθ [log L(θ)] = n/θ + ∑(i=1 to n) ln(X_i) = 0,

which gives θ = −n / ∑(i=1 to n) ln(X_i). The second derivative, −n/θ², is negative, so this critical point is a maximum. Note that ln(X_i) < 0 for X_i in (0,1), so this value is positive.

Therefore the MLE is ˆθ = −n / ∑(i=1 to n) ln(X_i), entered in the grader's syntax as -n/Sigma_i(ln(X_i)).
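
A minimal simulation sketch to sanity-check this formula (my own illustration, not part of the exercise; theta_true and n are arbitrary choices, and the sampler uses the inverse-cdf method X = U^(1/θ), since the cdf here is F_θ(x) = x^θ on [0,1]):

    import numpy as np

    rng = np.random.default_rng(0)
    theta_true, n = 2.5, 100_000
    # cdf is x**theta on [0, 1], so X = U**(1/theta) with U ~ Uniform(0, 1)
    X = rng.uniform(size=n) ** (1.0 / theta_true)
    theta_hat = -n / np.sum(np.log(X))   # MLE derived above
    print(theta_hat)                     # close to 2.5 for large n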

(c) The Fisher information is I(θ) = E[(d/dθ log f_θ(X))²] = −E[d²/dθ² log f_θ(X)].

Since log f_θ(x) = log θ + (θ − 1) log x for x in (0,1), we have

d/dθ [log f_θ(x)] = 1/θ + log x, and d²/dθ² [log f_θ(x)] = −1/θ².

The second derivative does not depend on x, so taking the expectation under f_θ gives

I(θ) = −E[−1/θ²] = 1/θ².

(Equivalently, one can check directly that E[(1/θ + log X)²] = Var(log X) = 1/θ², using E[log X] = −1/θ.)
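
A quick Monte Carlo check of this value (an added sketch using the same assumed inverse-cdf sampler as above; theta and n are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    theta, n = 1.7, 1_000_000
    X = rng.uniform(size=n) ** (1.0 / theta)   # inverse-cdf sampling from f_theta
    score = 1.0 / theta + np.log(X)            # d/dtheta log f_theta(X)
    print(np.mean(score ** 2))                 # Monte Carlo estimate of I(theta)
    print(1.0 / theta ** 2)                    # theoretical value 1/theta^2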

(d) By the asymptotic normality of the maximum likelihood estimator, √n(ˆθ − θ) converges in distribution to N(0, 1/I(θ)) as n grows large; in particular, the distribution of √n·ˆθ is approximately normal for large n. The answer is Normal.

(e) The asymptotic variance of the MLE is the inverse of the Fisher information: V(ˆθ) = 1/I(θ) = θ². (With the plug-in method, this is estimated by ˆθ².)
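
The following simulation sketch (an added illustration with arbitrary theta, n, and number of repetitions) checks both the normal limit in (d) and the variance θ² in (e): across many repetitions, √n(ˆθ − θ) should have mean near 0 and variance near θ².

    import numpy as np

    rng = np.random.default_rng(2)
    theta, n, reps = 2.0, 500, 5_000
    U = rng.uniform(size=(reps, n))
    X = U ** (1.0 / theta)                   # reps independent samples of size n via inverse cdf
    theta_hat = -n / np.log(X).sum(axis=1)   # MLE in each repetition
    Z = np.sqrt(n) * (theta_hat - theta)
    print(Z.mean(), Z.var())                 # roughly 0 and theta**2 = 4, as predicted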

(f) For asymptotic level 85%, set α = 0.15. By the asymptotic normality from (d), with the plug-in estimate ˆθ² of the asymptotic variance V(ˆθ), the shortest such confidence interval is centered at ˆθ:

I_plug-in = [ˆθ − q(α/2)·√(ˆθ²/n), ˆθ + q(α/2)·√(ˆθ²/n)] = [ˆθ − q(0.075)·ˆθ/√n, ˆθ + q(0.075)·ˆθ/√n],

where q(0.075) ≈ 1.44 is the quantile with P(Z ≤ q(0.075)) = 1 − 0.075 = 0.925. In the grader's syntax:

A = hattheta - q(0.075)*hattheta/sqrt(n)
B = hattheta + q(0.075)*hattheta/sqrt(n)
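
A numerical sketch of this interval (an added example with a made-up ˆθ and n, assuming SciPy is available for the normal quantile):

    import numpy as np
    from scipy.stats import norm

    theta_hat, n = 2.3, 400                   # hypothetical MLE value and sample size
    q = norm.ppf(1 - 0.15 / 2)                # q(0.075) ~ 1.4395, i.e. P(Z <= q) = 0.925
    half_width = q * theta_hat / np.sqrt(n)   # q(0.075) * sqrt(V_hat / n) with V_hat = theta_hat**2
    print(theta_hat - half_width, theta_hat + half_width)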

(g) For exercise 3, where Yi = 1(Xi ≤ 0.5), the random variable Yi takes only the values 0 and 1, so it follows a Bernoulli distribution.

The parameter of this distribution in terms of θ is m_θ = P(Yi = 1) = P(Xi ≤ 0.5).

Since Xi has pdf f_θ(x) = θx^(θ−1) * 1(0 ≤ x ≤ 1), we compute P(Xi ≤ 0.5) by integrating the pdf from 0 to 0.5:

P(Xi ≤ 0.5) = ∫[0 to 0.5] θx^(θ−1) dx = [x^θ] evaluated from 0 to 0.5 = 0.5^θ − 0 = 0.5^θ.

Therefore, m_θ = P(Yi = 1) = 0.5^θ.
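
A last simulation sketch (an added check with an arbitrary θ, using the same assumed inverse-cdf sampler as earlier) that the empirical frequency of Y_i = 1 matches 0.5^θ:

    import numpy as np

    rng = np.random.default_rng(3)
    theta, n = 3.0, 1_000_000
    X = rng.uniform(size=n) ** (1.0 / theta)   # inverse-cdf sampling from f_theta
    Y = (X <= 0.5).astype(int)                 # Y_i = 1(X_i <= 0.5)
    print(Y.mean())                            # empirical P(Y_i = 1)
    print(0.5 ** theta)                        # theoretical m_theta = 0.5**3 = 0.125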