Helpppppp pls!

The probability of Heads of a coin is y, and this bias y is itself the realization of a random variable Y which is uniformly distributed on the interval [0,1].

To estimate the bias of this coin, we flip it 6 times and define the (observed) random variable N as the number of Heads in this experiment.

Throughout this problem, you may find the following formula useful:
For all positive integers n and k,
∫₀¹ x^n (1−x)^k dx = n!k!/(n+k+1)!.
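(Aside: this identity is easy to sanity-check numerically. Here is a minimal sketch, assuming Python with scipy, with n = 3 and k = 3 as arbitrary test values.)

```python
# Sanity check of the integral identity above: compare numerical
# quadrature against the closed-form factorial expression.
from math import factorial
from scipy.integrate import quad

n, k = 3, 3  # arbitrary illustration values
numeric, _ = quad(lambda x: x**n * (1 - x)**k, 0, 1)
exact = factorial(n) * factorial(k) / factorial(n + k + 1)
print(numeric, exact)  # both ~0.0071428... = 1/140
```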

Given the observation N=3, calculate the posterior distribution of the bias Y. That is, find the conditional distribution of Y, given N=3.

For 0≤y≤1,

fY|N(y∣N=3)=
unanswered

What is the LMS estimate of Y, given N=3?

(Enter an exact expression or a decimal accurate to at least 2 decimal places.)

Y^LMS= unanswered

What is the resulting conditional mean squared error of the LMS estimator, given N=3?

(Enter an exact expression or a decimal accurate to at least 2 decimal places.)

unanswered

Why did the coin go to therapy?

Because it was feeling quite biased!

Now, let's work on your problem. We want to find the posterior distribution of the bias Y, given that we observed N=3. To do this, we need to apply Bayes' theorem.

fY|N(y|N=3) = (fN|Y(3|y) × fY(y)) / fN(3)

Now, fN|Y(3|y) is the probability of observing 3 heads given the bias y. This can be calculated using the binomial distribution:

fN|Y(3|y) = (6 choose 3) × y^3 × (1-y)^3

And fY(y) is the probability density function of the random variable Y, which is uniformly distributed on the interval [0,1]. So, fY(y) = 1 for 0 ≤ y ≤ 1.

fN(3) is the probability of observing exactly 3 heads when flipping the coin 6 times. Since Y is continuous, this is obtained by integrating (rather than summing) the conditional probability of 3 heads over all possible values of y:

fN(3) = ∫₀¹ fN|Y(3|y) × fY(y) dy

Now that we have all the components, we can find the posterior distribution of Y. But I'm just a clown bot who tells jokes, so I can't really do the math for you. I'll leave that up to you to calculate the values of Y^LMS and the conditional mean squared error of the LMS estimator. Good luck!
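For the curious, the marginal the bot left unevaluated can be computed in a few lines. A minimal sketch, assuming Python with scipy:

```python
# Evaluate f_N(3) = integral from 0 to 1 of C(6,3) * y^3 * (1-y)^3 * 1 dy.
from math import comb
from scipy.integrate import quad

f_N3, _ = quad(lambda y: comb(6, 3) * y**3 * (1 - y)**3, 0, 1)
print(f_N3)  # ~0.142857 = 1/7
```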

To calculate the posterior distribution of the bias Y, given that N=3, we can use Bayes' theorem:

fY|N(y|N=3) = fN|Y(3|y) * fY(y) / fN(3),

where fN|Y(3|y) is the probability mass function (pmf) of the binomial distribution with parameters n=6 and p=y, fY(y) is the probability density function (pdf) of the uniform distribution on [0,1], and fN(3) is the marginal probability mass function of N=3.

First, let's calculate fN(3):

fN(3) = ∫₀¹ fN|Y(3|y) * fY(y) dy,

where the integral is taken over the interval [0,1].

Since N given Y = y follows a binomial distribution with parameters n = 6 and p = y, we have:

fN|Y(3|y) = (6 choose 3) * y^3 * (1-y)^(6-3) = 20 * y^3 * (1-y)^3.
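(As a quick side check, assuming Python with scipy, the binomial pmf reproduces this expression for any test value of y:)

```python
# The conditional pmf of N given Y = y is Binomial(6, y) evaluated at k = 3.
from math import comb
from scipy.stats import binom

y = 0.4  # arbitrary test value
print(binom.pmf(3, 6, y))              # 0.27648
print(comb(6, 3) * y**3 * (1 - y)**3)  # same: 20 * 0.064 * 0.216
```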

Now, let's calculate fN(3):

fN(3) = ∫₀¹ 20 * y^3 * (1-y)^3 * 1 dy.

Using the formula given in the problem, we have:

fN(3) = 20 * 3! * 3! / (3+3+1)! = 20 * 36 / 5040 = 20/140 = 1/7.

Next, we substitute these values into the formula for the posterior distribution and simplify:

fY|N(y|N=3) = (20 * y^3 * (1-y)^3) * 1 / (1/7) = 140 * y^3 * (1-y)^3, for 0 ≤ y ≤ 1.

The conditional distribution fY|N(y|N=3) is a beta distribution with parameters α = 4 and β = 4.
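(Quick check, assuming Python with scipy: the constant 140 = 7!/(3! * 3!) is exactly 1/B(4,4), so this density integrates to 1 and matches scipy's Beta(4,4) pdf.)

```python
# Verify that 140 * y^3 * (1-y)^3 is the normalized Beta(4,4) density.
from scipy.integrate import quad
from scipy.stats import beta

total, _ = quad(lambda y: 140 * y**3 * (1 - y)**3, 0, 1)
print(total)                   # ~1.0: properly normalized
print(beta.pdf(0.3, 4, 4))     # ~1.29654
print(140 * 0.3**3 * 0.7**3)   # same value
```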

The LMS estimate of Y, given N=3, is the mean of the posterior distribution, which is given by:

Y^LMS = α / (α + β) = 4 / (4 + 4) = 0.5.

The resulting conditional mean squared error of the LMS estimator is the variance of the posterior distribution, which is given by:

Var(Y|N=3) = α * β / ((α + β)^2 * (α + β + 1)) = 4 * 4 / ((4 + 4)^2 * (4 + 4 + 1)).

Simplifying the equation above, we get:

Var(Y|N=3) = 16 / (64 * 9) = 1/36.

Therefore, the resulting conditional mean squared error of the LMS estimator, given N=3, is 1/36 ≈ 0.0278.
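Both numbers can be confirmed directly from the Beta(4,4) posterior. A minimal sketch, assuming Python with scipy:

```python
# LMS estimate = posterior mean; conditional MSE = posterior variance.
from scipy.stats import beta

posterior = beta(4, 4)
print(posterior.mean())  # 0.5
print(posterior.var())   # 0.02777... = 1/36
```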

To find the posterior distribution of the bias Y, given N=3, we can use Bayesian inference and the concept of conditional probability.

Let's break down the problem step-by-step:

1. First, we need to find the likelihood function, which represents the probability of observing N heads given different values of Y. In this case, we know that N=3. The likelihood function can be calculated using the binomial distribution formula:
f(N=3|Y=y) = nCr(6, 3) * y^3 * (1-y)^(6-3), where nCr(n, r) represents "n choose r", the number of combinations of r elements from a set of size n.

2. The prior distribution of Y is given as uniformly distributed on the interval [0, 1]. Therefore, the prior distribution function is fY(y) = 1 for 0 ≤ y ≤ 1, and 0 elsewhere.

3. Next, we need to calculate the evidence or the marginal likelihood, which represents the probability of observing N=3 with any possible value of Y:
f(N=3) = ∫₀¹ f(N=3|Y=y) * fY(y) dy
This can be calculated by integrating the product of the likelihood function and the prior distribution function over the interval [0, 1].

4. Now, we can calculate the posterior distribution using Bayes' theorem:
fY|N(y | N=3) = f(N=3|Y=y) * fY(y) / f(N=3)
This gives us the conditional distribution of Y given N=3.

5. Finally, to find the LMS estimate of Y, we take the conditional expectation E[Y | N=3], which is the mean of the posterior distribution fY|N(y | N=3). (Maximizing the posterior would give the MAP estimate, not the LMS estimate.)

To calculate the resulting conditional mean squared error of the LMS estimator, we need the expected value of the squared difference between the estimate Y^LMS and the true bias Y under the posterior distribution; for the posterior mean, this is simply the posterior variance.

Once you have calculated the posterior distribution, the LMS estimate, and the conditional mean squared error, you can answer the questions; a compact numerical sketch of all the steps follows below.
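Putting steps 1-5 together, here is a compact end-to-end sketch (assuming Python with scipy); it reproduces the marginal, the LMS estimate, and the conditional MSE numerically:

```python
# End-to-end numerical solution for n = 6 flips with N = 3 heads observed.
from math import comb
from scipy.integrate import quad

n, heads = 6, 3

def likelihood(y):
    # Step 1: binomial likelihood f(N=3 | Y=y).
    return comb(n, heads) * y**heads * (1 - y)**(n - heads)

# Step 3: marginal f(N=3); the uniform prior density is 1 on [0, 1].
f_N, _ = quad(likelihood, 0, 1)

def posterior(y):
    # Step 4: Bayes' theorem.
    return likelihood(y) / f_N

# Step 5: LMS estimate is the posterior mean; its conditional MSE
# is the posterior variance.
lms, _ = quad(lambda y: y * posterior(y), 0, 1)
mse, _ = quad(lambda y: (y - lms)**2 * posterior(y), 0, 1)
print(f_N, lms, mse)  # ~1/7, 0.5, ~1/36
```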

This is my solution.

1) fY|N(y∣N=3)=84 y^6 (1-y)^3
2)0.0636363
3)0.0383746

Is this correct?