We are given a biased coin, where the (random) bias Θ is uniformly distributed on [0,1]. We toss this coin until we see Heads for the first time. Our goal is to estimate Θ using our observation.

1. Find the LMS estimator (ΘˆLMS) for Θ. Your answer should be a function of N, the number of tosses up to and including the toss at which Heads were observed. (Make sure to use upper case N when entering your answer.)

2. Find E[Θ/(ΘˆLMS)].

1. 2/(N+2)

2. 1

To find the LMS estimator (ΘˆLMS) for Θ, we need the conditional (posterior) density of Θ given the number of tosses N until we see Heads for the first time.

Let's start with part 1:

1. By Bayes' rule, the posterior density of Θ given N = n is

f(Θ|N=n) = f(Θ) P(N=n|Θ) / P(N=n) = n(n+1) Θ (1-Θ)^(n-1), for 0 ≤ Θ ≤ 1

Here the prior is f(Θ) = 1 (uniform), and the likelihood is P(N=n|Θ) = Θ(1-Θ)^(n-1): the probability of n-1 Tails in a row is (1-Θ)^(n-1), and the probability of Heads on the n-th toss is Θ. The normalizing constant comes from P(N=n) = ∫₀¹ Θ(1-Θ)^(n-1) dΘ = 1/(n(n+1)).
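
As a quick numerical sanity check (a minimal numpy sketch; the grid resolution, the values of n, and the function name are arbitrary choices), we can confirm that this density integrates to 1:

```python
import numpy as np

# Posterior density f(theta | N = n) = n(n+1) * theta * (1 - theta)^(n-1)
def posterior(theta, n):
    return n * (n + 1) * theta * (1.0 - theta) ** (n - 1)

theta, dtheta = np.linspace(0.0, 1.0, 200001, retstep=True)
for n in (1, 2, 5, 10):
    total = np.sum(posterior(theta, n)) * dtheta  # Riemann sum, should be ~1
    print(f"n={n}: integral ≈ {total:.4f}")
```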

Now, the LMS (least mean squares) estimator is the one that minimizes the conditional mean squared error, and it equals the conditional expectation of Θ given the observation — not the value of Θ that maximizes the posterior (that would be the MAP estimator).

So we compute the mean of the posterior density found above:

E[Θ|N=n] = ∫₀¹ Θ · n(n+1) Θ (1-Θ)^(n-1) dΘ
= n(n+1) ∫₀¹ Θ² (1-Θ)^(n-1) dΘ
= n(n+1) · 2/(n(n+1)(n+2))
= 2/(n+2)

(The remaining integral is the Beta function B(3,n) = 2!(n-1)!/(n+2)! = 2/(n(n+1)(n+2)).)

Therefore, the LMS estimator for Θ when N = n is ΘˆLMS = 2/(n+2).

Answer to part 1: ΘˆLMS = 2/(N+2).
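
To see this empirically, here is a small Monte Carlo sketch (the seed, sample size, and variable names are arbitrary choices) that draws Θ uniformly, simulates the geometric number of tosses, and compares the conditional average of Θ with 2/(n+2):

```python
import numpy as np

rng = np.random.default_rng(0)
num_samples = 2_000_000

theta = rng.uniform(0.0, 1.0, num_samples)  # Theta ~ Uniform[0, 1]
n_obs = rng.geometric(theta)                # N | Theta ~ Geometric(Theta)

for n in (1, 2, 5, 10):
    sample_mean = theta[n_obs == n].mean()  # estimate of E[Theta | N = n]
    print(f"n={n}: simulated {sample_mean:.4f} vs theory {2 / (n + 2):.4f}")
```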

Moving on to part 2:

2. We need to find the expected value of Θ/ΘˆLMS, where ΘˆLMS = 2/(N+2).

Substituting the estimator, Θ/ΘˆLMS = Θ(N+2)/2. We apply the law of iterated expectations, conditioning on N:

E[Θ(N+2)/2 | N] = (N+2)/2 · E[Θ|N] = (N+2)/2 · 2/(N+2) = 1

Since this conditional expectation equals 1 for every value of N, the outer expectation is also 1:

E[Θ/ΘˆLMS] = E[ E[Θ(N+2)/2 | N] ] = E[1] = 1

Answer to part 2: E[Θ/(ΘˆLMS)] = 1.
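
The same Monte Carlo setup (again a rough sketch with an arbitrary seed and sample size) confirms the answer:

```python
import numpy as np

rng = np.random.default_rng(1)
num_samples = 1_000_000

theta = rng.uniform(0.0, 1.0, num_samples)  # Theta ~ Uniform[0, 1]
n_obs = rng.geometric(theta)                # N | Theta ~ Geometric(Theta)

theta_lms = 2.0 / (n_obs + 2)               # LMS estimate 2/(N+2)
print(f"E[Theta / Theta_LMS] ≈ {(theta / theta_lms).mean():.4f}")  # ~1.0
```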

1. To find the LMS (Least Mean Squares) estimator for Θ, recall what loss it minimizes. Under the squared error loss L(Θ, Θˆ) = (Θ - Θˆ)², the estimator that minimizes the expected loss given the observation is the posterior mean, ΘˆLMS = E[Θ|N].

Let's denote the outcome of each coin toss by X_i, where X_i = 1 if the i-th toss results in Heads and X_i = 0 if it results in Tails. Observing N = n means X_1 = ... = X_(n-1) = 0 and X_n = 1, so the likelihood of the observation is Θ(1-Θ)^(n-1).

The uniform prior on [0,1] is the Beta(1,1) distribution, which is conjugate for Bernoulli observations: the posterior is Beta(1 + #Heads, 1 + #Tails) = Beta(2, n). Indeed, its density is proportional to Θ(1-Θ)^(n-1), matching the posterior computed in the first solution.

The mean of a Beta(a,b) distribution is a/(a+b), so:

ΘˆLMS = E[Θ|N=n] = 2/(2+n) = 2/(N+2)

2. For the expected value E[Θ/(ΘˆLMS)], write Θ/ΘˆLMS = Θ(N+2)/2 and condition on N:

E[Θ/(ΘˆLMS)] = E[ (N+2)/2 · E[Θ|N] ] = E[ (N+2)/2 · 2/(N+2) ] = E[1] = 1
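
As a quick check of the conjugacy argument (a brief scipy sketch; the values of n are arbitrary), the mean of the Beta(2, n) posterior matches 2/(n+2):

```python
from scipy.stats import beta

# Posterior of Theta given N = n is Beta(2, n); its mean is a/(a+b) = 2/(2+n)
for n in (1, 2, 5, 10):
    print(f"n={n}: Beta(2,{n}) mean = {beta.mean(2, n):.4f}, "
          f"2/(n+2) = {2 / (n + 2):.4f}")
```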