We are given a biased coin, where the (random) bias Θ is uniformly distributed on [0,1]. We toss this coin until we see Heads for the first time. Our goal is to estimate Θ using our observation.

Find the LMS estimator Θ̂_LMS for Θ. Your answer should be a function of N, the number of tosses up to and including the toss at which Heads were observed. (Make sure to use upper case N when entering your answer.)

Find E[Θ/Θ̂_LMS].

The LMS (least mean squares) estimator of Θ is the conditional expectation of Θ given the observation: Θ̂_LMS = E[Θ | N], where N is the number of tosses up to and including the toss at which Heads were observed. Given Θ = θ, N is geometric with parameter θ, so P(N = n | Θ = θ) = (1-θ)^(n-1) * θ. Combining this likelihood with the uniform prior, the posterior density of Θ given N = n is proportional to θ(1-θ)^(n-1), which is a Beta(2, n) density with mean 2/(n+2).

So, Θ̂_LMS = 2/(N+2).
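
As a quick sanity check on this formula, E[Θ | N = n] can be approximated by simulation and compared with 2/(n+2). This is a rough sketch, assuming NumPy is available; the variable names are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
num_samples = 1_000_000

theta = rng.uniform(0.0, 1.0, size=num_samples)  # Theta ~ Uniform[0, 1]
n = rng.geometric(theta)                         # N | Theta ~ Geometric(Theta), support {1, 2, ...}

for k in (1, 2, 3, 5, 10):
    empirical = theta[n == k].mean()             # empirical E[Theta | N = k]
    print(k, empirical, 2 / (k + 2))             # the two values should be close
```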

Now let's find E[Θ/Θ̂_LMS]:

E[Θ/Θ̂_LMS] = E[Θ * (N+2)/2] = (E[Θ * N] + 2 * E[Θ]) / 2

Since Θ ~ U(0,1), the expected value of Θ is 1/2. Note that Θ and N are not independent, so E[Θ * N] cannot be split into E[Θ] * E[N]; we compute it with the law of iterated expectations.

E[Θ * N] = E[ E[Θ * N | Θ] ] = E[ Θ * E[N | Θ] ]

Conditioned on Θ, N is geometric with parameter Θ, so E[N | Θ] = 1/Θ. Hence

E[Θ * N] = E[ Θ * (1/Θ) ] = E[1] = 1.

(Incidentally, the unconditional mean E[N] = E[1/Θ] = ∫_0^1 (1/θ) dθ is infinite, which is another reason the independence shortcut would fail here.)

E[Θ/Θ̂_LMS] = (E[Θ * N] + 2 * E[Θ]) / 2 = (1 + 2 * (1/2)) / 2 = 1

So, the expected value of Θ/Θ̂_LMS is 1.
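
Reusing the same sampling scheme as above, the expectation itself can be checked numerically (again assuming NumPy; this is only a sanity check, not part of the derivation).

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 1.0, size=1_000_000)  # random bias
n = rng.geometric(theta)                       # tosses until the first Heads
theta_lms = 2.0 / (n + 2)                      # LMS estimate for each sample
print(np.mean(theta / theta_lms))              # should be close to 1
```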

To find the LMS estimator for Θ, we compute the conditional expectation of Θ given the observation, i.e., the mean of the posterior distribution of Θ given N. (Maximizing the likelihood would instead give the ML estimate 1/N, which is a different estimator and not what is asked for.)

Let's denote the LMS estimator as Θ̂_LMS = E[Θ | N].

Step 1:
First, let's calculate the probability of getting a sequence of N-1 tails and 1 head, given a bias Θ.

P(X_1 = T, X_2 = T, ..., X_{N-1} = T, X_N = H | Θ) = (1-Θ)^(N-1) * Θ

Step 2:
Next, let's find the posterior density of Θ given the observation. By Bayes' rule, and because the prior of Θ is uniform on [0,1], the posterior is proportional to the likelihood:

f(θ | N) ∝ (1-θ)^(N-1) * θ,   0 ≤ θ ≤ 1,

which is a Beta(2, N) density.
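
As a small illustration of this Beta(2, N) identification (a sketch, assuming SciPy is available; the normalizing constant n(n+1) is 1/B(2, n)):

```python
import numpy as np
from scipy.stats import beta

n = 4                                     # an example observation, N = 4
theta = np.linspace(0.0, 1.0, 6)
kernel = theta * (1 - theta)**(n - 1)     # uniform prior times likelihood
print(kernel * n * (n + 1))               # normalized posterior density
print(beta(2, n).pdf(theta))              # Beta(2, n) density: same values
```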

Step 3:
The LMS estimator is the mean of this posterior. Using the Beta integral ∫_0^1 θ^(a-1) (1-θ)^(b-1) dθ = (a-1)!(b-1)!/(a+b-1)! for positive integers a and b:

E[Θ | N = n] = ∫_0^1 θ * θ(1-θ)^(n-1) dθ / ∫_0^1 θ(1-θ)^(n-1) dθ

The numerator equals 2!(n-1)!/(n+2)! = 2/(n(n+1)(n+2)) and the denominator equals 1!(n-1)!/(n+1)! = 1/(n(n+1)), so the ratio simplifies to:

E[Θ | N = n] = 2/(n+2)

Therefore, the LMS estimator for Θ, Θ̂_LMS, is given by:

Θ̂_LMS = 2/(N+2)
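
If SymPy is available, the posterior-mean integral can also be verified symbolically. This is a minimal sketch with illustrative symbol names.

```python
import sympy as sp

theta = sp.symbols('theta', positive=True)
n = sp.symbols('n', integer=True, positive=True)

kernel = theta * (1 - theta)**(n - 1)   # posterior density up to a constant
numerator = sp.integrate(theta * kernel, (theta, 0, 1), conds='none')
denominator = sp.integrate(kernel, (theta, 0, 1), conds='none')
print(sp.simplify(numerator / denominator))   # -> 2/(n + 2)
```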

Now, let's find E[Θ/Θ̂_LMS].

E[Θ/Θ̂_LMS] = E[Θ * (N+2)/2]   [dividing by Θ̂_LMS = 2/(N+2) is equivalent to multiplying by (N+2)/2]

= (E[Θ * N] + 2 * E[Θ]) / 2 = (E[Θ * N] + 1) / 2, since E[Θ] = 1/2. It remains to compute E[Θ * N].

By the law of iterated expectations:

E[Θ * N] = E[ E[Θ * N | Θ] ] = E[ Θ * E[N | Θ] ]

Conditioned on Θ, N (the number of tosses until we first see Heads) follows a geometric distribution with parameter Θ, so:

E[N | Θ] = 1/Θ

Substituting this in, we get:

E[Θ * N] = E[Θ * (1/Θ)] = E[1] = 1

and therefore:

E[Θ/Θ̂_LMS] = (E[Θ * N] + 1) / 2 = (1 + 1) / 2 = 1

Therefore, E[Θ/Θ̂_LMS] = 1.
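
Both facts used above, E[N | Θ = θ] = 1/θ and E[Θ * N] = 1, can be spot-checked by simulation (a sketch assuming NumPy; the particular θ values and sample sizes are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(2)

# E[N | Theta = theta] = 1/theta for a few fixed biases
for theta in (0.1, 0.3, 0.7):
    n = rng.geometric(theta, size=1_000_000)
    print(theta, n.mean(), 1 / theta)

# E[Theta * N] = 1 when the bias itself is Uniform[0, 1]
theta = rng.uniform(0.0, 1.0, size=1_000_000)
n = rng.geometric(theta)
print(np.mean(theta * n))   # should be close to 1
```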

To find the LMS (Least Mean Squares) estimator for Θ, we again use the fact that it is the conditional expectation E[Θ | N], where N is the number of tosses until we see Heads for the first time. (The method of moments would give the different estimator 1/N, which is not the LMS estimator.)

Given Θ = θ, the probability of getting Heads on any single toss is θ. Therefore, conditionally on Θ, N follows a geometric distribution with success probability Θ.

The conditional probability mass function (PMF) of N is:

P(N = k | Θ) = (1 - Θ)^(k-1) * Θ,   k = 1, 2, ...

The conditional expectation E[N | Θ] is:

E[N | Θ] = Σ(k=1 to ∞) k * (1 - Θ)^(k-1) * Θ

Differentiating the geometric series (or simply quoting the mean of a geometric distribution), this simplifies to:

E[N | Θ] = 1/Θ

Note that inverting this relation, 1/E[N | Θ] = Θ, does not produce an estimator: it depends on the unknown Θ rather than on the observed data. The LMS estimator is instead the conditional expectation of Θ given the data. As derived above, the posterior of Θ given N = n is Beta(2, n), so:

Θ̂_LMS = E[Θ | N] = 2/(N+2)

Therefore, the LMS estimator for Θ is 2/(N+2).

Now, to find E[Θ/Θ̂_LMS], we substitute Θ̂_LMS = 2/(N+2) into the expression:

E[Θ/Θ̂_LMS] = E[Θ * (N+2)/2] = (E[Θ * N] + 2 * E[Θ]) / 2 = (1 + 1) / 2 = 1,

using E[Θ * N] = E[Θ * E[N | Θ]] = E[1] = 1 and E[Θ] = 1/2.

Hence, the expected value of Θ divided by the LMS estimator Θ̂_LMS is 1.
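
Finally, to illustrate the "least mean squares" property itself, the sketch below (assuming NumPy; the competing estimators are just illustrative choices) compares the mean squared error of 2/(N+2) with the ML estimate 1/N and with the constant prior mean 1/2. The LMS estimator should come out smallest.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = rng.uniform(0.0, 1.0, size=1_000_000)
n = rng.geometric(theta)

estimators = {
    "2/(N+2) (LMS)": 2.0 / (n + 2),
    "1/N (ML)": 1.0 / n,
    "1/2 (prior mean)": np.full(theta.shape, 0.5),
}
for name, estimate in estimators.items():
    mse = np.mean((estimate - theta) ** 2)
    print(name, mse)   # the LMS estimator has the smallest mean squared error
```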