Let N be a random variable with mean E[N] = m and variance Var(N) = v. Let A1, A2, … be a sequence of i.i.d. random variables, all independent of N, with mean 1 and variance 1, and let B1, B2, … be another sequence of i.i.d. random variables, independent of N and of A1, A2, …, also with mean 1 and variance 1. Let A = ∑_(i=1)^N Ai and B = ∑_(i=1)^N Bi.

1. Find the following expectations using the law of iterated expectations. Express each answer in terms of m and v, using standard notation.
E[AB]=
E[NA]=
2. Let N^ = c1*A + c2 be the LLMS estimator of N given A. Find c1 and c2 in terms of m and v.
c1=
c2=

Part 1: both answers are v + m^2.

Part 2:
c1 = v/(m + v)
c2 = m^2/(m + v)

1. E[AB] = E[E[AB|N]]. Given N, the sums A and B are conditionally independent (the Ai's and Bi's are independent sequences), so E[AB|N] = E[A|N] * E[B|N] = (N * 1) * (N * 1) = N^2. Therefore E[AB] = E[N^2] = Var(N) + (E[N])^2 = v + m^2.

E[NA] = E[E[NA|N]] = E[N * E[A|N]] = E[N * N] = E[N^2] = v + m^2.
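
As a quick numerical sanity check, here is a small Monte Carlo sketch of both expectations. The specific distributions (N ~ Binomial(10, 0.3), so m = 3 and v = 2.1, and Ai, Bi ~ Exponential(1), which have mean 1 and variance 1) are just an illustrative choice, not part of the problem:

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000

# Hypothetical distributions chosen only for this check:
# N ~ Binomial(10, 0.3)      => m = 3.0, v = 2.1
# A_i, B_i ~ Exponential(1)  => mean 1, variance 1
m, v = 3.0, 2.1
N = rng.binomial(10, 0.3, size=trials)
A = np.array([rng.exponential(1.0, size=k).sum() for k in N])
B = np.array([rng.exponential(1.0, size=k).sum() for k in N])

print("E[AB] simulated:", (A * B).mean(), " predicted v + m^2:", v + m**2)
print("E[NA] simulated:", (N * A).mean(), " predicted v + m^2:", v + m**2)
```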

2. The LLMS estimator of N given A is N^ = c1 * A + c2 with

c1 = Cov(N, A) / Var(A)
c2 = E[N] - c1 * E[A]

Note that N and A are not independent (A is a sum of N terms), which is exactly why A carries information about N. We need three quantities:

E[A] = E[E[A|N]] = E[N] = m

Var(A) = E[Var(A|N)] + Var(E[A|N]) = E[N * 1] + Var(N) = m + v   (law of total variance)

Cov(N, A) = E[NA] - E[N]E[A] = (v + m^2) - m * m = v

Therefore:

c1 = v/(m + v)
c2 = m - m * v/(m + v) = m^2/(m + v)
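
If you want to verify these coefficients numerically, a least-squares fit of simulated N against simulated A should reproduce them approximately (the LLMS estimator is the best linear fit in the mean-square sense). The distributions below are again an arbitrary illustrative choice satisfying the stated moments:

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 100_000

# Same hypothetical choice as before: N ~ Binomial(10, 0.3), A_i ~ Exp(1)
m, v = 3.0, 2.1
N = rng.binomial(10, 0.3, size=trials)
A = np.array([rng.exponential(1.0, size=k).sum() for k in N])

# The least-squares line fitting N against A approximates the LLMS coefficients
c1_hat, c2_hat = np.polyfit(A, N, 1)
print("c1 simulated:", c1_hat, " predicted v/(m+v):  ", v / (m + v))
print("c2 simulated:", c2_hat, " predicted m^2/(m+v):", m**2 / (m + v))
```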

1. To find E[AB] using the law of iterated expectations, condition on the value of N:

E[AB] = E[E[AB|N]]

Now compute E[AB|N]. Given N, the sum A involves only the Ai's and the sum B involves only the Bi's, and the two sequences are independent of each other, so A and B are conditionally independent given N:

E[AB|N] = E[A|N] * E[B|N]

Since E[A|N] = N * E[A1] = N * 1 = N, and likewise E[B|N] = N, we have:

E[AB|N] = N * N = N^2

Substituting this back into the original equation, we get:

E[AB] = E[N^2] = Var(N) + (E[N])^2 = v + m^2

So, E[AB] = v + m^2.

Next, let's find E[NA] using the law of iterated expectations, again conditioning on N:

E[NA] = E[E[NA|N]]

Given N, the factor N is a constant and can be pulled out of the conditional expectation:

E[NA|N] = N * E[A|N] = N * N = N^2

Substituting this back, we get:

E[NA] = E[N^2] = v + m^2

So, E[NA] = v + m^2. (Note that A is not independent of N, since the number of terms in A depends on N, so it would be wrong to write E[NA] = E[N]E[A].)
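
The conditioning step itself can be checked empirically by grouping simulated samples according to the realized value of N; within each group the averages of A and of AB should be close to N and N^2. The distributions used below are the same illustrative (not problem-specified) choice as in the earlier sketches:

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 100_000

# Hypothetical distributions for the check: N ~ Binomial(10, 0.3), A_i, B_i ~ Exp(1)
N = rng.binomial(10, 0.3, size=trials)
A = np.array([rng.exponential(1.0, size=k).sum() for k in N])
B = np.array([rng.exponential(1.0, size=k).sum() for k in N])

for k in (1, 3, 5):
    mask = N == k
    print(f"N={k}: E[A|N] ~ {A[mask].mean():.3f} (predicted {k}),",
          f"E[AB|N] ~ {(A[mask] * B[mask]).mean():.3f} (predicted {k**2})")
```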

2. To find c1 and c2, the coefficients of the LLMS estimator of N given A (N^ = c1*A + c2), we minimize the mean squared error:

min E[(N - N^)^2] = min E[(N - c1*A - c2)^2]

Take partial derivatives with respect to c1 and c2 and set them equal to zero.

With respect to c2:

∂/∂c2 E[(N - c1*A - c2)^2] = -2 E[N - c1*A - c2] = 0
=> c2 = E[N] - c1*E[A]

With respect to c1:

∂/∂c1 E[(N - c1*A - c2)^2] = -2 E[A(N - c1*A - c2)] = 0
=> E[NA] = c1*E[A^2] + c2*E[A]

Substituting c2 = E[N] - c1*E[A] into the second condition:

E[NA] - E[N]E[A] = c1*(E[A^2] - (E[A])^2)
Cov(N, A) = c1 * Var(A)
=> c1 = Cov(N, A) / Var(A)

Now compute the moments that appear:

E[A] = E[E[A|N]] = E[N] = m

Var(A) = E[Var(A|N)] + Var(E[A|N]) = E[N * Var(A1)] + Var(N) = E[N] + Var(N) = m + v
(law of total variance; given N, A is a sum of N independent terms, each with variance 1)

Cov(N, A) = E[NA] - E[N]E[A] = (v + m^2) - m^2 = v   (using E[NA] = v + m^2 from part 1)

Therefore:

c1 = v/(m + v)

c2 = E[N] - c1*E[A] = m - m*v/(m + v) = m^2/(m + v)

So, c1 = v/(m + v) and c2 = m^2/(m + v).
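
As an algebra check, the two first-order conditions can be handed to a computer algebra system with the moments derived above plugged in. A minimal sympy sketch (assuming those moment expressions) is:

```python
import sympy as sp

m, v, c1, c2 = sp.symbols('m v c1 c2', positive=True)

# Moments derived in the text above:
E_A  = m               # E[A]   = E[N]
E_A2 = (m + v) + m**2  # E[A^2] = Var(A) + (E[A])^2
E_NA = v + m**2        # E[NA]  = E[N^2]
E_N  = m

# Normal equations obtained by setting the two partial derivatives to zero
eq_c1 = sp.Eq(E_NA, c1 * E_A2 + c2 * E_A)
eq_c2 = sp.Eq(E_N,  c1 * E_A  + c2)

sol = sp.solve([eq_c1, eq_c2], [c1, c2])
print(sp.simplify(sol[c1]))  # v/(m + v)
print(sp.simplify(sol[c2]))  # m**2/(m + v)
```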

To find the expectations using the law of iterated expectations, we apply the concept of conditional expectation. Recall that for any two random variables X and Y, the law of iterated expectations states that E[X] = E[E[X|Y]]. Here the natural variable to condition on is N, because once N is fixed, the random sums become ordinary sums of a fixed number of i.i.d. terms.

The key facts about the random sum A = A1 + ... + AN (and likewise B) are:

E[A|N] = N * E[A1] = N, and Var(A|N) = N * Var(A1) = N

E[A] = E[E[A|N]] = E[N] = m

Var(A) = E[Var(A|N)] + Var(E[A|N]) = m + v

1. E[AB]:

Conditioning on N, and using the fact that the Ai's and Bi's are independent sequences (so A and B are conditionally independent given N):

E[AB] = E[E[AB|N]] = E[E[A|N] * E[B|N]] = E[N * N] = E[N^2] = v + m^2

(It would be a mistake to treat A and B as unconditionally independent: both depend on N, which makes them positively correlated.)

2. E[NA]:

E[NA] = E[E[NA|N]] = E[N * E[A|N]] = E[N * N] = E[N^2] = v + m^2

(N can be pulled out of the conditional expectation given N, but not out of the outer, unconditional expectation.)
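
These random-sum moment facts are easy to confirm by simulation; the sketch below uses the same illustrative distributions as the earlier checks (any choice with the stated means and variances would do):

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 100_000

# Hypothetical choice: N ~ Binomial(10, 0.3) (m = 3.0, v = 2.1), A_i ~ Exp(1)
m, v = 3.0, 2.1
N = rng.binomial(10, 0.3, size=trials)
A = np.array([rng.exponential(1.0, size=k).sum() for k in N])

print("E[A]     ~", A.mean(),           " predicted m:    ", m)
print("Var(A)   ~", A.var(),            " predicted m + v:", m + v)
print("Cov(N,A) ~", np.cov(N, A)[0, 1], " predicted v:    ", v)
```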

Now, let's find c1 and c2 in terms of m and v for the LLMS estimator N^ = c1*A + c2.

The LLMS estimator minimizes the mean squared error (MSE) between N and the estimate N^. Therefore, we need to find the values of c1 and c2 that minimize the MSE.

The MSE is defined as: MSE = E[(N - N^)^2] = E[(N - (c1*A + c2))^2]

To minimize the MSE, we take the partial derivatives with respect to c1 and c2 and set them equal to zero:

∂MSE/∂c1 = -2E[AN] + 2c1*E[A^2] + 2c2*E[A] = 0

∂MSE/∂c2 = -2E[N] + 2c1*E[A] + 2c2 = 0

Solving these equations simultaneously (the second gives c2 = E[N] - c1*E[A], which substituted into the first gives Cov(N, A) = c1*Var(A)), we get:

c1 = Cov(N, A) / Var(A)

c2 = E[N] - c1*E[A]

Substituting the moments computed above (E[A] = m, Var(A) = m + v, and Cov(N, A) = E[NA] - E[N]E[A] = (v + m^2) - m^2 = v):

c1 = v/(m + v)

c2 = m - m*v/(m + v) = m^2/(m + v)

Both c1 and c2 are constants expressed purely in terms of m and v; they do not depend on the random variables N or A themselves.
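
As a final algebraic sanity check (pure symbol manipulation on the formulas just derived), c2 indeed equals m - c1*m, and the estimator's mean c1*m + c2 reduces to m, i.e., E[N^] matches E[N]:

```python
import sympy as sp

m, v = sp.symbols('m v', positive=True)

c1 = v / (m + v)
c2 = m**2 / (m + v)

# c2 should equal E[N] - c1*E[A] = m - c1*m
print(sp.simplify(c2 - (m - c1 * m)))  # 0

# Mean of the estimator: E[c1*A + c2] = c1*m + c2, which should simplify to m
print(sp.simplify(c1 * m + c2))        # m
```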