The lifetime (in thousands of hours) X of a light bulb has pdf

g(x)= \lambda e^{-\lambda x}, \hspace{3mm} x\geq 0

for some unknown \lambda >0.

We collect {\color{blue}{n=33}} independent light bulbs at random and record their lifetimes X_1,\ldots ,X_n, which are all independent copies of X. We find that {\color{blue}{\overline{X_n}=42.6}} thousand hours.

We now want to test

H_0\,:\,\lambda = 0.03 \quad \text{vs} \quad H_1\,:\,\lambda \neq 0.03
at (significance) level \alpha =5\%.

Write down the test statistic T_n^{\text{LR}} for the likelihood ratio test, in terms of \hat{\lambda}^{\text{MLE}}, S_n, and n.

(Enter hatlambda for \hat{\lambda}^{\text{MLE}} and S_n for S_n=\sum_{i=1}^n X_i.)

T_{n}^{\text {LR}}=\quad
What is the p-value p^{\text{LR}} of the likelihood ratio test?

(Enter an answer accurate to at least 3 decimal places.)

p^{\text {LR}}=\quad

What is the conclusion of the Likelihood Ratio test?

(Read the choices carefully, especially the subscripts.)

Reject H_0

Do not reject H_0.

Conclude H_0 is true.

Reject H_1

Do not reject H_1.

Conclude H_1 is true.

The likelihood ratio test statistic is given by:

T_{n}^{\text {LR}} = -2\ln\left(\frac{L(\lambda_0)}{L(\hat{\lambda}^{\text{MLE}})}\right)

where \lambda_0 is the null value of \lambda (0.03 in this case), L(\lambda) is the likelihood function, and \hat{\lambda}^{\text{MLE}} is the maximum likelihood estimator of \lambda.

In this case, the likelihood function is given by:

L(\lambda) = \prod_{i=1}^{n} \lambda e^{-\lambda x_i} = \lambda^n e^{-\lambda\sum_{i=1}^{n}x_i}
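For concreteness, here is a minimal Python sketch of this log-likelihood. It uses only the summary quantities n and S_n = \sum_i X_i, since the likelihood depends on the data only through that sum; the helper name exp_log_lik and the variable names are mine, not part of the problem.

```python
import numpy as np

def exp_log_lik(lam, n, s_n):
    """Exponential log-likelihood: log L(lambda) = n*log(lambda) - lambda*S_n."""
    return n * np.log(lam) - lam * s_n

# Summary statistics given in the problem
n = 33
x_bar = 42.6      # sample mean, in thousands of hours
s_n = n * x_bar   # S_n = sum of the lifetimes = 1405.8
```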

Given that \overline{X_n} = 42.6 thousand hours, we estimate \lambda by maximum likelihood: maximizing \ln L(\lambda) = n\ln\lambda - \lambda S_n (setting the derivative n/\lambda - S_n to zero) gives

\hat{\lambda}^{\text{MLE}} = \frac{n}{S_n} = \frac{1}{\overline{X_n}}

Substituting the likelihoods and writing S_n = \sum_{i=1}^{n}x_i = n\overline{X_n}, the test statistic is:

T_{n}^{\text{LR}} = -2\ln\left(\frac{L(0.03)}{L(\hat{\lambda}^{\text{MLE}})}\right) = -2\ln\left(\frac{(0.03)^n e^{-0.03 S_n}}{\left(\hat{\lambda}^{\text{MLE}}\right)^n e^{-\hat{\lambda}^{\text{MLE}} S_n}}\right) = 2\left(n\ln\frac{\hat{\lambda}^{\text{MLE}}}{0.03} + \left(0.03 - \hat{\lambda}^{\text{MLE}}\right) S_n\right)
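Plugging in the given numbers (n = 33 and \overline{X_n} = 42.6, so \hat{\lambda}^{\text{MLE}} = 1/42.6 and S_n = 1405.8), a quick numerical check of this closed form could look like the following sketch; the variable names are my own.

```python
import numpy as np

n, x_bar, lam_0 = 33, 42.6, 0.03
s_n = n * x_bar                 # S_n = 1405.8
lam_hat = 1.0 / x_bar           # MLE of lambda, about 0.0235

# T = 2 * ( n*ln(lam_hat/lam_0) + (lam_0 - lam_hat)*S_n )
t_lr = 2 * (n * np.log(lam_hat / lam_0) + (lam_0 - lam_hat) * s_n)
print(round(t_lr, 3))           # about 2.158
```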

The p-value of the likelihood ratio test is the probability, assuming the null hypothesis is true, of obtaining a test statistic at least as extreme as the one observed. It is given by:

p^{\text{LR}} = P(T_{n}^{\text{LR}} \geq k | H_0)

where k is the observed test statistic.

In this problem all of the required quantities are known: n = 33 and \overline{X_n} = 42.6, so S_n = n\overline{X_n} = 1405.8 and \hat{\lambda}^{\text{MLE}} = 1/42.6 \approx 0.0235, and the observed test statistic is k = T_n^{\text{LR}} \approx 2.158.
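By Wilks' theorem, under H_0 the statistic T_n^{\text{LR}} is asymptotically chi-square distributed with 1 degree of freedom (the null fixes a single parameter), so the p-value is the upper \chi^2_1 tail beyond the observed value. A minimal sketch using scipy, with variable names of my own choosing:

```python
from scipy.stats import chi2

t_lr = 2.158                    # observed likelihood ratio statistic from above
p_value = chi2.sf(t_lr, df=1)   # upper-tail probability of chi-square with 1 df
print(round(p_value, 3))        # about 0.142
```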

To recap the calculation of the test statistic T_n^{\text{LR}} step by step: we first need the maximum likelihood estimator for \lambda, denoted \hat{\lambda}^{\text{MLE}}.

The maximum likelihood estimator for \lambda is given by: \hat{\lambda}^{MLE} = \frac{1}{\overline{X_n}}.

Given that \overline{X_n} = 42.6, we have:

\hat{\lambda}^{MLE} = \frac{1}{42.6}

Next, we calculate the test statistic:

T_{n}^{\text{LR}} = -2\ln\left(\frac{L(H_0)}{L(H_1)}\right)

where L(H_0) is the likelihood maximized under the null hypothesis H_0 and L(H_1) is the likelihood maximized under the alternative (unrestricted) model.

Under the null hypothesis H_0, \lambda is fixed at the null value, so the restricted estimate is simply \lambda_0 = 0.03.

So, the likelihood under H_0 is:

L(H_0) = \prod_{i=1}^n g(x_i; \lambda = 0.03)

where g(x; \lambda) is the given pdf.

Similarly, under the alternative hypothesis H_1, the likelihood is:

L(H_1) = \prod_{i=1}^n g(x_i; \lambda = \hat{\lambda}^{MLE})

Now, using \hat{\lambda}^{\text{MLE}}, \lambda_0, and the given values of n and \overline{X_n}, we can calculate the test statistic T_n^{\text{LR}}, as in the sketch below.
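To illustrate this general definition numerically, here is a self-contained sketch that evaluates both maximized log-likelihoods through the sufficient statistic S_n (no individual x_i are needed; the helper exp_log_lik and the variable names are mine, not part of the problem):

```python
import numpy as np

def exp_log_lik(lam, n, s_n):
    # log L(lambda) = n*log(lambda) - lambda*S_n for the exponential model
    return n * np.log(lam) - lam * s_n

n, x_bar, lam_0 = 33, 42.6, 0.03
s_n = n * x_bar
lam_hat = 1.0 / x_bar           # unrestricted MLE

# T = -2 * ( log L(H_0) - log L(H_1) ) = 2 * ( log L(lam_hat) - log L(lam_0) )
t_lr = 2 * (exp_log_lik(lam_hat, n, s_n) - exp_log_lik(lam_0, n, s_n))
print(round(t_lr, 3))           # matches the earlier value, about 2.158
```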

Once we have the value of T_n^{\text{LR}}, we can calculate the p-value of the likelihood ratio test using the chi-square distribution with 1 degree of freedom (by Wilks' theorem, since the null fixes one parameter).

The p-value is given by:

p^{\text{LR}} = P\left(\chi^2_1 > T_n^{\text{LR}}\right)

where \chi^2_1 denotes a chi-square random variable with 1 degree of freedom.

To determine the conclusion of the likelihood ratio test, we compare the calculated p-value with the significance level \alpha = 0.05. If the p-value is less than \alpha, we reject the null hypothesis H_0; otherwise, we do not reject H_0. Here p^{\text{LR}} \approx 0.142 > 0.05, so we do not reject H_0.
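Finally, here is one compact, self-contained sketch of the whole procedure, using only the numbers quoted in the problem statement (the variable names are my own):

```python
import numpy as np
from scipy.stats import chi2

n, x_bar, lam_0, alpha = 33, 42.6, 0.03, 0.05
s_n = n * x_bar                 # S_n = 1405.8
lam_hat = 1.0 / x_bar           # MLE of lambda

# Likelihood ratio statistic and its asymptotic chi-square(1) p-value
t_lr = 2 * (n * np.log(lam_hat / lam_0) + (lam_0 - lam_hat) * s_n)
p_value = chi2.sf(t_lr, df=1)

print(f"T = {t_lr:.3f}, p = {p_value:.3f}")
print("Reject H0" if p_value < alpha else "Do not reject H0")
# Expected output: T about 2.158, p about 0.142  ->  Do not reject H0
```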