As above, under the statistical model (\{1, 2, 3\}, \{\mathbf{P}_{\mathbf{p}}\}_{\mathbf{p} \in \Delta_3}), we have

L_{12}(\mathbf{x}, \mathbf{p}) = p_1^A p_2^B p_3^C

where

\mathbf{x} = 1, 3, 1, 2, 2, 2, 1, 1, 3, 1, 1, 2.

In the previous problem, you found the specific values for A, B, and C.
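Since A, B, and C are simply the counts of 1s, 2s, and 3s in \mathbf{x}, they can be double-checked with a couple of lines of Python; a minimal sketch, with illustrative variable names:

```python
# Recover the exponents A, B, C as the outcome counts in the sample x.
from collections import Counter

x = [1, 3, 1, 2, 2, 2, 1, 1, 3, 1, 1, 2]
counts = Counter(x)
A, B, C = counts[1], counts[2], counts[3]
print(A, B, C)  # 6 4 2
```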

Recall that the MLE is given by

\widehat{\mathbf{p}}^{MLE}_n = \text{argmax}_{\mathbf{p} \in \Delta_3} \log L_n(X_1, \ldots, X_n, \mathbf{p}).

By the theory of Lagrange multipliers, one can show that the maximum occurs at the point \mathbf{p} such that there exists \lambda \neq 0 so that

\nabla \log L_n(X_1, \ldots, X_n, \mathbf{p}) = \lambda \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.

(The gradient above is taken with respect to the parameter \mathbf{p}.)
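To see where this condition leads without grinding through the algebra, the Lagrange system together with the simplex constraint can be solved symbolically. Below is a minimal sketch assuming sympy is available; the symbol names are illustrative:

```python
# Solve grad(log L) = lam * (1, 1, 1) together with p1 + p2 + p3 = 1,
# keeping the exponents A, B, C symbolic.
import sympy as sp

p1, p2, p3, lam = sp.symbols("p1 p2 p3 lam", positive=True)
A, B, C = sp.symbols("A B C", positive=True)

logL = A * sp.log(p1) + B * sp.log(p2) + C * sp.log(p3)
eqs = [sp.Eq(sp.diff(logL, v), lam) for v in (p1, p2, p3)]
eqs.append(sp.Eq(p1 + p2 + p3, 1))

sol = sp.solve(eqs, [p1, p2, p3, lam], dict=True)
print(sol)  # [{p1: A/(A + B + C), p2: B/(A + B + C), p3: C/(A + B + C), lam: A + B + C}]
```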

Using this result and the previous problem, what is the estimate \widehat{\mathbf{p}}^{MLE}_{12} for \mathbf{p} given the data set \mathbf{x}?

(\widehat{\mathbf{p}}^{MLE}_{12})_1 =

(\widehat{\mathbf{p}}^{MLE}_{12})_2 =

(\widehat{\mathbf{p}}^{MLE}_{12})_3 =

To find the estimate \(\widehat{\mathbf{p}}^{MLE}_{12}\) for \(\mathbf{p}\) given the data set \(\mathbf{x}\), we need to maximize the log-likelihood function \(\log L_{12}(X_1, \ldots, X_{12}, \mathbf{p})\) with respect to \(\mathbf{p}\). The log-likelihood function is given by:

\[
\log L_{12}(\mathbf{x}, \mathbf{p}) = A \log(p_1) + B \log(p_2) + C \log(p_3)
\]

We already found the values for \(A\), \(B\), and \(C\) in the previous problem. Now, we will differentiate the log-likelihood function with respect to \(p_1\), \(p_2\), and \(p_3\) and set the derivatives equal to \(\lambda\):

\[
\begin{aligned}
\frac{\partial \log L_{12}}{\partial p_1} &= \frac{A}{p_1} = \lambda \\
\frac{\partial \log L_{12}}{\partial p_2} &= \frac{B}{p_2} = \lambda \\
\frac{\partial \log L_{12}}{\partial p_3} &= \frac{C}{p_3} = \lambda \\
\end{aligned}
\]

From these equations, \(p_1 = A/\lambda\), \(p_2 = B/\lambda\), and \(p_3 = C/\lambda\). Since \(\mathbf{p} \in \Delta_3\), the coordinates must sum to 1, so \((A + B + C)/\lambda = 1\), which gives \(\lambda = A + B + C = 12\).

Now, we can find the estimates for \(\widehat{\mathbf{p}}^{MLE}_{12}\) by substituting \(\lambda = 12\) and the counts \(A = 6\), \(B = 4\), \(C = 2\) from the previous problem into these equations:

\[
\begin{aligned}
(\widehat{\mathbf{p}}^{MLE}_{12})_1 &= \frac{A}{\lambda} = \frac{6}{12} = \frac{1}{2} \\
(\widehat{\mathbf{p}}^{MLE}_{12})_2 &= \frac{B}{\lambda} = \frac{4}{12} = \frac{1}{3} \\
(\widehat{\mathbf{p}}^{MLE}_{12})_3 &= \frac{C}{\lambda} = \frac{2}{12} = \frac{1}{6}
\end{aligned}
\]

Therefore, the estimate \(\widehat{\mathbf{p}}^{MLE}_{12}\) for \(\mathbf{p}\) given the data set \(\mathbf{x}\) is:

\((\widehat{\mathbf{p}}^{MLE}_{12})_1 = \frac{1}{2}\), \((\widehat{\mathbf{p}}^{MLE}_{12})_2 = \frac{1}{3}\), and \((\widehat{\mathbf{p}}^{MLE}_{12})_3 = \frac{1}{6}\).
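As a numerical sanity check, one can also maximize the log-likelihood directly over the simplex. Below is a minimal sketch assuming numpy and scipy are available; the function and variable names are just illustrative:

```python
# Numerically maximize log L_12 over the simplex and compare the result
# with the closed-form answer (1/2, 1/3, 1/6).
import numpy as np
from scipy.optimize import minimize

A, B, C = 6, 4, 2  # counts of 1s, 2s, and 3s in x

def neg_log_lik(p):
    # Negative log-likelihood, since scipy minimizes by default.
    return -(A * np.log(p[0]) + B * np.log(p[1]) + C * np.log(p[2]))

res = minimize(
    neg_log_lik,
    x0=np.full(3, 1 / 3),      # start from the uniform distribution
    method="SLSQP",
    bounds=[(1e-9, 1.0)] * 3,  # keep the logarithms well-defined
    constraints={"type": "eq", "fun": lambda p: p.sum() - 1.0},
)
print(res.x)  # approximately [0.5, 0.3333, 0.1667]
```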

To find the MLE estimate for p given the data set x, we need to maximize the log-likelihood function.

First, let's write out the log-likelihood function log L_n(x, p):

log L_n(x, p) = log(p_1^A * p_2^B * p_3^C)
= A log(p_1) + B log(p_2) + C log(p_3)

Next, let's calculate the gradient of the log-likelihood function:

∇ log L_n(x, p) = (∂ log L_n/∂p_1, ∂ log L_n/∂p_2, ∂ log L_n/∂p_3)

= (A/p_1, B/p_2, C/p_3)

Now, using the result from the previous problem, where A = 6, B = 4, and C = 2, we can solve for the p that satisfies the following equation:

∇logL_n(x, p) = λ[1 1 1]

Componentwise, this reads:

6/p_1 = 4/p_2 = 2/p_3 = λ

so that:

p_1 = 6/λ, p_2 = 4/λ, p_3 = 2/λ

Since p must lie in the simplex Δ_3, the coordinates sum to 1, which forces λ = 6 + 4 + 2 = 12. We can then solve for p_1, p_2, and p_3 as follows:

p_1 = 6/(6 + 4 + 2) = 6/12 = 1/2
p_2 = 4/(6 + 4 + 2) = 4/12 = 1/3
p_3 = 2/(6 + 4 + 2) = 2/12 = 1/6

Therefore, the MLE estimate for p given the data set x is:

(\hat{p}^{MLE}_{12})_1 = 1/2
(\hat{p}^{MLE}_{12})_2 = 1/3
(\hat{p}^{MLE}_{12})_3 = 1/6
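As a final check, the gradient at this point should be a constant vector, matching the Lagrange condition with λ = 12 = A + B + C. A minimal sketch, assuming numpy is available:

```python
# Verify that grad(log L) at p-hat = (1/2, 1/3, 1/6) equals lambda * (1, 1, 1).
import numpy as np

A, B, C = 6, 4, 2
p_hat = np.array([1 / 2, 1 / 3, 1 / 6])

grad = np.array([A, B, C]) / p_hat  # componentwise A/p_1, B/p_2, C/p_3
print(grad)  # [12. 12. 12.] -> lambda = 12, i.e. the sample size n
```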