Now, suppose that we instead have the proper prior [mathjaxinline]\pi (\lambda ) \sim[/mathjaxinline] [mathjaxinline]\textsf{Exp}(a)[/mathjaxinline] ([mathjaxinline]a > 0[/mathjaxinline]). Again, just as in part (b): conditional on [mathjaxinline]\lambda[/mathjaxinline], we have observations [mathjaxinline]X _1[/mathjaxinline], [mathjaxinline]X _2[/mathjaxinline], [mathjaxinline]\cdots[/mathjaxinline], [mathjaxinline]X _{n}[/mathjaxinline] [mathjaxinline]\stackrel{\text {i.i.d}}{\sim }[/mathjaxinline] [mathjaxinline]\textsf{N}(\lambda , 1)[/mathjaxinline]. You may assume that [mathjaxinline]a < \displaystyle \sum _{i=1}^{n} X_ i[/mathjaxinline]. Compute the posterior distribution [mathjaxinline]\pi (\lambda | X_1, X_2, \ldots , X_ n)[/mathjaxinline], then provide the following statistics on the posterior distribution. Write Phi for the CDF function [mathjaxinline]\Phi ()[/mathjaxinline] and PhiInv for its inverse.

Use SumXi for [mathjaxinline]\sum _{i=1}^ n X_ i[/mathjaxinline].

median:

To compute the posterior distribution [mathjaxinline]\pi(\lambda | X_1, X_2, \ldots, X_n)[/mathjaxinline], we can use Bayes' theorem:

[mathjaxinline]\pi(\lambda | X_1, X_2, \ldots, X_n) \propto \pi(\lambda) \cdot f(X_1, X_2, \ldots, X_n | \lambda)[/mathjaxinline],

where [mathjaxinline]\pi(\lambda)[/mathjaxinline] is the prior distribution and [mathjaxinline]f(X_1, X_2, \ldots, X_n | \lambda)[/mathjaxinline] is the likelihood function.

Given that the prior is exponential with rate [mathjaxinline]a[/mathjaxinline], i.e. [mathjaxinline]\pi(\lambda) = a e^{-a\lambda}[/mathjaxinline] for [mathjaxinline]\lambda \geq 0[/mathjaxinline] (and [mathjaxinline]0[/mathjaxinline] otherwise), and that, conditionally on [mathjaxinline]\lambda[/mathjaxinline], the observations [mathjaxinline]X_1, X_2, \ldots, X_n[/mathjaxinline] are i.i.d. draws from [mathjaxinline]\textsf{N}(\lambda, 1)[/mathjaxinline], the likelihood factorizes over the observations:

[mathjaxinline]f(X_1, X_2, \ldots, X_n | \lambda) = \prod_{i=1}^{n} f(X_i|\lambda)[/mathjaxinline].

Substituting the probability density function (PDF) of the normal distribution [mathjaxinline]\textsf{N}(\lambda, 1)[/mathjaxinline], we have:

[mathjaxinline]\begin{align*}
f(X_1, X_2, \ldots, X_n | \lambda) &= \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}(X_i - \lambda)^2\right) \\
&= \frac{1}{(2\pi)^{\frac{n}{2}}} \exp\left(-\frac{1}{2}\sum_{i=1}^{n}(X_i - \lambda)^2\right) \\
&= \frac{1}{(2\pi)^{\frac{n}{2}}} \exp\left(-\frac{1}{2}\left(\sum_{i=1}^{n}X_i^2 - 2\lambda\sum_{i=1}^{n}X_i + n\lambda^2\right)\right) \\
&= \frac{1}{(2\pi)^{\frac{n}{2}}} \exp\left(-\frac{1}{2}\sum_{i=1}^{n}X_i^2 + \lambda\sum_{i=1}^{n}X_i - \frac{n}{2}\lambda^2\right) \\
&\propto \exp\left(\lambda\sum_{i=1}^{n}X_i - \frac{n}{2}\lambda^2\right),
\end{align*}[/mathjaxinline]

where, in the last step, factors that do not depend on [mathjaxinline]\lambda[/mathjaxinline] have been dropped.
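As a quick numerical sanity check, here is a minimal sketch (assuming NumPy and SciPy are available; the data are synthetic and purely illustrative) that the full log-likelihood differs from the kernel [mathjaxinline]\lambda\sum_{i=1}^{n}X_i - \frac{n}{2}\lambda^2[/mathjaxinline] only by a term that does not involve [mathjaxinline]\lambda[/mathjaxinline]:

```python
# Sketch: the N(lambda, 1) log-likelihood minus the kernel
# lambda*SumXi - n*lambda**2/2 is constant in lambda.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
X = rng.normal(loc=1.5, scale=1.0, size=20)  # synthetic data, purely illustrative
n, SumXi = len(X), X.sum()

for lam in (0.5, 1.0, 2.0):
    full_loglik = norm.logpdf(X, loc=lam, scale=1.0).sum()
    kernel = lam * SumXi - n * lam**2 / 2
    # The difference equals -(n/2)*log(2*pi) - sum(X**2)/2 for every lam.
    print(round(full_loglik - kernel, 10))
```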

Next, we multiply the prior by the likelihood, keeping track of the prior's support [mathjaxinline]\lambda \geq 0[/mathjaxinline]:

[mathjaxinline]\begin{align*}
\pi(\lambda | X_1, X_2, \ldots, X_n) &\propto \pi(\lambda) \cdot f(X_1, X_2, \ldots, X_n | \lambda) \\
&\propto \exp(-a\lambda)\,\mathbf{1}\{\lambda \geq 0\} \cdot \exp\left(\lambda\sum_{i=1}^{n}X_i - \frac{n}{2}\lambda^2\right) \\
&= \exp\left(\lambda\left(\sum_{i=1}^{n}X_i - a\right) - \frac{n}{2}\lambda^2\right)\mathbf{1}\{\lambda \geq 0\} \\
&\propto \exp\left(-\frac{n}{2}\left(\lambda - \frac{\sum_{i=1}^{n}X_i - a}{n}\right)^2\right)\mathbf{1}\{\lambda \geq 0\},
\end{align*}[/mathjaxinline]

where the last step completes the square in [mathjaxinline]\lambda[/mathjaxinline] and again drops a factor free of [mathjaxinline]\lambda[/mathjaxinline].

This is the kernel of a [mathjaxinline]\textsf{N}\left(\mu, \frac{1}{n}\right)[/mathjaxinline] density, [mathjaxinline]\mu = \frac{\sum_{i=1}^{n}X_i - a}{n}[/mathjaxinline], truncated to [mathjaxinline][0, \infty)[/mathjaxinline]. The posterior is therefore a truncated normal with density

[mathjaxinline]\pi(\lambda | X_1, X_2, \ldots, X_n) = \frac{\sqrt{n}\,\varphi\left(\sqrt{n}(\lambda - \mu)\right)}{1 - \Phi(-\sqrt{n}\mu)} \quad \text{for } \lambda \geq 0[/mathjaxinline]

(and [mathjaxinline]0[/mathjaxinline] for [mathjaxinline]\lambda < 0[/mathjaxinline]), where [mathjaxinline]\varphi[/mathjaxinline] denotes the standard normal PDF.
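To see concretely that this is a truncated normal, the following sketch (assuming SciPy; the values of [mathjaxinline]a[/mathjaxinline], [mathjaxinline]n[/mathjaxinline], and SumXi are hypothetical) compares the unnormalized posterior kernel with scipy.stats.truncnorm, whose first two arguments are the truncation bounds in standardized units:

```python
# Sketch: on lambda >= 0, the unnormalized posterior kernel is a constant
# multiple of the N(mu, 1/n) density truncated to [0, inf).
import numpy as np
from scipy.stats import truncnorm

a, n, SumXi = 0.7, 20, 2.0           # hypothetical values with a < SumXi
mu, sigma = (SumXi - a) / n, 1 / np.sqrt(n)

lam = np.linspace(0.01, 1.0, 5)
kernel = np.exp(lam * (SumXi - a) - n * lam**2 / 2)
density = truncnorm.pdf(lam, (0 - mu) / sigma, np.inf, loc=mu, scale=sigma)
print(kernel / density)              # the same ratio at every lambda
```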

Because of the truncation at [mathjaxinline]0[/mathjaxinline], the median of the posterior is not simply [mathjaxinline]\mu[/mathjaxinline]. The median [mathjaxinline]m[/mathjaxinline] solves [mathjaxinline]\mathbb{P}(\lambda \leq m \mid X_1, \ldots, X_n) = \frac{1}{2}[/mathjaxinline], that is,

[mathjaxinline]\frac{\Phi\left(\sqrt{n}(m - \mu)\right) - \Phi(-\sqrt{n}\mu)}{1 - \Phi(-\sqrt{n}\mu)} = \frac{1}{2},[/mathjaxinline]

which rearranges to [mathjaxinline]\Phi\left(\sqrt{n}(m - \mu)\right) = \frac{1 + \Phi(-\sqrt{n}\mu)}{2}[/mathjaxinline] and hence

[mathjaxinline]m = \mu + \frac{1}{\sqrt{n}}\,\Phi^{-1}\!\left(\frac{1 + \Phi(-\sqrt{n}\mu)}{2}\right), \qquad \mu = \frac{\sum_{i=1}^{n}X_i - a}{n}.[/mathjaxinline]

(The assumption [mathjaxinline]a < \sum_{i=1}^{n} X_i[/mathjaxinline] guarantees [mathjaxinline]\mu > 0[/mathjaxinline], so the argument of [mathjaxinline]\Phi^{-1}[/mathjaxinline] lies strictly between [mathjaxinline]\frac{1}{2}[/mathjaxinline] and [mathjaxinline]1[/mathjaxinline].) Since [mathjaxinline]\sqrt{n}\mu = \frac{\sum_{i=1}^{n}X_i - a}{\sqrt{n}}[/mathjaxinline], in the requested notation the answer is:

Median: (SumXi - a)/n + PhiInv((1 + Phi(-(SumXi - a)/sqrt(n)))/2)/sqrt(n)
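As a check on this formula (again a sketch with the same hypothetical values, assuming SciPy), norm.cdf and norm.ppf play the roles of Phi and PhiInv, and the result can be compared against the truncated normal's quantile function directly:

```python
# Sketch: the closed-form median matches truncnorm's 0.5-quantile.
import numpy as np
from scipy.stats import norm, truncnorm

a, n, SumXi = 0.7, 20, 2.0           # hypothetical values with a < SumXi
mu, sigma = (SumXi - a) / n, 1 / np.sqrt(n)

median_formula = mu + sigma * norm.ppf((1 + norm.cdf(-mu / sigma)) / 2)
median_direct = truncnorm.ppf(0.5, (0 - mu) / sigma, np.inf, loc=mu, scale=sigma)
print(median_formula, median_direct)  # agree to numerical precision
```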
