As in the last video, let X=delta+W, where delta and W are independent normal random variables and W has mean zero.

1. Assume that W has positive variance. Are X and W independent?
2. Find the MAP estimator of delta based on X if delta follows Normal(1,1) and W follows Normal(0,1), and evaluate the corresponding estimate if X=2.
3. Find the MAP estimator of delta based on X if delta follows Normal(0,1) and W follows Normal(0,4) and evaluate the corresponding estimate if X=2.
4. For this part of the problem, suppose instead that X=2*delta+3*W, where delta and W are standard normal random variables. Find the estimator of delta based on X under this model and evaluate the corresponding estimate if X=2.

1. To determine whether X and W are independent, we can compute their covariance. Since delta and W are independent, Cov(delta, W) = 0, so:

Cov(X, W) = Cov(delta + W, W) = Cov(delta, W) + Var(W) = Var(W)

If W has positive variance, then Cov(X, W) = Var(W) > 0. Independent random variables have zero covariance, so X and W are NOT independent.
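The dependence can be illustrated with a quick simulation (a sketch using numpy; the sample size and the Normal(1,1), Normal(0,1) parameters, borrowed from part 2, are illustrative choices — the argument holds for any Var(W) > 0):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

delta = rng.normal(1.0, 1.0, size=n)   # delta ~ Normal(1, 1)
w = rng.normal(0.0, 1.0, size=n)       # W ~ Normal(0, 1), independent of delta
x = delta + w

# Cov(X, W) = Cov(delta, W) + Var(W) = Var(W) = 1 here; a nonzero covariance
# rules out independence.
print(np.cov(x, w)[0, 1])              # close to 1
```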

2. The Maximum A Posteriori (MAP) estimator for delta based on X can be found by maximizing the posterior probability of delta given X. Using Bayes' theorem, the posterior probability can be written as:

P(delta | X) ∝ P(X | delta) * P(delta)

Given that delta follows Normal(1,1) and W follows Normal(0,1), consider the likelihood of X given delta. Conditioned on delta, X = delta + W is normal with mean delta and variance 1 (the distribution of W shifted by delta), so:

f(X | delta) = (1 / sqrt(2π)) * exp(-(X - delta)^2 / 2)

The prior probability distribution of delta is given as:

P(delta) = f(delta) = (1 / sqrt(2π)) * exp(-(delta - 1)^2 / 2)

To find the MAP estimate of delta, we need to maximize the posterior probability:

P(delta | X) ∝ P(X | delta) * P(delta)

Taking the logarithm (which does not change the maximizer), we have, up to an additive constant:

log(P(delta | X)) = log(P(X | delta)) + log(P(delta))

To find the value of delta that maximizes this expression, we differentiate with respect to delta, set the derivative equal to zero, and solve for delta.

d(log(P(delta | X))) / d(delta) = 0

Substituting the likelihood and prior expressions and simplifying, we have:

d(log(f(X | delta)) + log(f(delta))) / d(delta) = 0

Simplifying further, we get:

d(-0.5*(X - delta)^2 - 0.5*(delta - 1)^2) / d(delta) = 0

Carrying out the differentiation, we have:

X - delta - (delta - 1) = 0

Simplifying, we get:

X - 2 delta + 1 = 0

Solving for delta, we find:

delta = (X + 1) / 2

Given X = 2, we can evaluate the corresponding estimate of delta:

delta = (2 + 1) / 2 = 1.5

Therefore, the MAP estimate of delta based on X = 2 is 1.5.
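As a numerical sanity check, the log-posterior can be maximized on a grid (a sketch using numpy; the search range and resolution are arbitrary choices):

```python
import numpy as np

# Log-posterior up to an additive constant:
# X | delta ~ Normal(delta, 1), prior delta ~ Normal(1, 1).
def log_posterior(delta, x):
    return -0.5 * (x - delta) ** 2 - 0.5 * (delta - 1.0) ** 2

x = 2.0
grid = np.linspace(-5.0, 5.0, 100_001)   # arbitrary search range/resolution
map_estimate = grid[np.argmax(log_posterior(grid, x))]
print(map_estimate)                       # close to (X + 1) / 2 = 1.5
```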

3. Following a similar approach as in part 2, we can find the MAP estimator of delta based on X if delta follows Normal(0,1) and W follows Normal(0,4).

Conditioned on delta, X = delta + W is normal with mean delta and variance 4, since W follows Normal(0,4). The likelihood is therefore:

f(X | delta) = (1 / sqrt(2π * 4)) * exp(-(X - delta)^2 / (2 * 4))

With the prior delta ~ Normal(0,1), the log-posterior is, up to an additive constant:

log(P(delta | X)) = -(X - delta)^2 / (2 * 4) - delta^2 / 2

Differentiating with respect to delta and setting the derivative equal to zero:

(X - delta) / 4 - delta = 0

Multiplying through by 4 and rearranging, we get:

X - 5 delta = 0

Solving for delta, we find:

delta = X / 5

Given X = 2, we can evaluate the corresponding estimate of delta:

delta = 2 / 5 = 0.4

Therefore, the MAP estimate of delta based on X = 2 is 0.4. Note that the noisier observation (Var(W) = 4) pulls the estimate more strongly toward the prior mean of 0.
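The same grid check applies here (a sketch using numpy; only the likelihood variance and the prior mean change relative to part 2):

```python
import numpy as np

# Log-posterior up to an additive constant:
# X | delta ~ Normal(delta, 4), prior delta ~ Normal(0, 1).
def log_posterior(delta, x):
    return -0.5 * (x - delta) ** 2 / 4.0 - 0.5 * delta ** 2

x = 2.0
grid = np.linspace(-5.0, 5.0, 100_001)   # arbitrary search range/resolution
map_estimate = grid[np.argmax(log_posterior(grid, x))]
print(map_estimate)                       # close to X / 5 = 0.4
```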

4. In this case, X = 2*delta + 3*W, where delta and W are standard normal random variables.

To find the estimator of delta based on X under this model, we can again use the MAP estimator (for jointly normal variables it coincides with the least-mean-squares estimator E[delta | X]). Note that W is not observed, so it cannot appear in the estimator; only X can. Conditioned on delta, X = 2*delta + 3*W is normal with mean 2*delta and variance 9, and the prior is delta ~ Normal(0, 1). Up to an additive constant, the log-posterior is:

log(P(delta | X)) = -(X - 2*delta)^2 / (2 * 9) - delta^2 / 2

Differentiating with respect to delta and setting the derivative equal to zero:

(2 / 9) * (X - 2*delta) - delta = 0

Multiplying through by 9 and rearranging, we get:

2*X - 13*delta = 0

Solving for delta, we find:

delta = 2*X / 13

Given X = 2, we can evaluate the corresponding estimate of delta:

delta = 4 / 13 ≈ 0.31

Therefore, the estimator of delta based on X, under the model X = 2*delta + 3*W, is delta = 2*X/13, and the estimate for X = 2 is 4/13 ≈ 0.31.
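As before, a grid maximization of the log-posterior confirms the answer (a sketch using numpy; range and resolution are arbitrary):

```python
import numpy as np

# Log-posterior up to an additive constant:
# X | delta ~ Normal(2*delta, 9) since 3*W has variance 9; prior delta ~ Normal(0, 1).
def log_posterior(delta, x):
    return -0.5 * (x - 2.0 * delta) ** 2 / 9.0 - 0.5 * delta ** 2

x = 2.0
grid = np.linspace(-5.0, 5.0, 100_001)   # arbitrary search range/resolution
map_estimate = grid[np.argmax(log_posterior(grid, x))]
print(map_estimate)                       # close to 2*X / 13 ≈ 0.3077
```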

1. No. Although delta and W are independent of each other, X = delta + W is built from W, and Cov(X, W) = Var(W) > 0 when W has positive variance. A nonzero covariance rules out independence, so X and W are not independent.

2. To find the MAP estimator of delta based on X, we need to find the value of delta that maximizes the posterior probability P(delta|X). Using Bayes' theorem, we have:

P(delta|X) = P(X|delta) * P(delta) / P(X)

Since delta follows Normal(1,1) and W follows Normal(0,1), X given delta is Normal(delta, 1), and we can write the densities:

f(X|delta) = (1 / sqrt(2π)) * exp(-(X - delta)^2 / 2)

f(delta) = (1 / sqrt(2π)) * exp(-(delta - 1)^2 / 2)

P(X) = ∫ f(X|delta) * f(delta) d(delta), which does not depend on delta and can be ignored when maximizing over delta.

To find the MAP estimator, we want the value of delta that maximizes P(delta|X), or equivalently f(X|delta) * f(delta). Setting the derivative of the log of this product to zero gives (X - delta) - (delta - 1) = 0, so the MAP estimator is delta = (X + 1) / 2. Substituting X = 2 yields the estimate (2 + 1) / 2 = 1.5.

3. Following similar steps as in the previous part, with delta ~ Normal(0,1) and W ~ Normal(0,4), X given delta is Normal(delta, 4):

f(X|delta) = (1 / sqrt(2π * 4)) * exp(-(X - delta)^2 / (2 * 4))

f(delta) = (1 / sqrt(2π)) * exp(-delta^2 / 2)

Maximizing f(X|delta) * f(delta) over delta gives (X - delta)/4 - delta = 0, so the MAP estimator is delta = X / 5, and the corresponding estimate for X = 2 is 2/5 = 0.4.

4. In this case, X=2*delta+3*W. Since delta and W are standard normal random variables, they have mean zero and variance one. The estimator of delta based on X that minimizes the mean squared error

MSE = E((delta_est - delta)^2)

over all functions of X is the conditional expectation delta_est = E[delta | X]. Since delta and X are jointly normal with E[X] = 0, Var(X) = 2^2 * Var(delta) + 3^2 * Var(W) = 4 + 9 = 13, and Cov(delta, X) = 2 * Var(delta) = 2, we have:

delta_est = E[delta | X] = (Cov(delta, X) / Var(X)) * X = (2 / 13) * X

Substituting X = 2 gives delta_est = 4/13 ≈ 0.31.
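The conditional expectation E[delta | X = 2] can also be approximated by simulation (a sketch using numpy; the sample size and the 0.05 window half-width around X = 2 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

delta = rng.standard_normal(n)    # delta ~ Normal(0, 1)
w = rng.standard_normal(n)        # W ~ Normal(0, 1), independent of delta
x = 2.0 * delta + 3.0 * w

# Approximate E[delta | X = 2] by averaging delta over samples with X near 2.
near = np.abs(x - 2.0) < 0.05
estimate = delta[near].mean()
print(estimate)                   # close to 2*2 / 13 ≈ 0.3077
```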