We are given a biased coin, where the (random) bias Θ is uniformly distributed on [0,1]. We toss this coin until we see Heads for the first time. Our goal is to estimate Θ using our observation.

1. Find the LMS estimator Θ̂LMS for Θ. Your answer should be a function of N, the number of tosses up to and including the toss at which Heads were observed. (Make sure to use upper case N when entering your answer.)

2. Find E[Θ/Θ̂LMS].

To find the LMS (Least Mean Squares) estimator for Θ, we need the relationship between the observation N and the bias. Since the bias Θ is uniformly distributed on [0,1], and since, given Θ = θ, each toss comes up Heads with probability θ, the number of tosses N carries information about Θ.

1. The LMS estimator Θ̂LMS is the estimator that minimizes the mean squared error (MSE); it is given by the conditional expectation Θ̂LMS = E[Θ | N]. To compute it, we first determine the conditional distribution, given Θ, of the number of tosses N until the first Heads.

Let's examine this probability distribution step by step:

- The probability of obtaining heads on the first toss is Θ.
- The probability of getting tails on the first toss and then obtaining heads on the second toss is (1-Θ) * Θ.
- The probability of getting tails on the first two tosses and then obtaining heads on the third toss is (1-Θ)^2 * Θ.
- Continuing this pattern, the probability that the first Heads appears on the nth toss is (1-Θ)^(n-1) * Θ.

We can see that, conditioned on Θ = θ, N follows a geometric distribution with parameter θ: p_{N|Θ}(n | θ) = (1-θ)^(n-1) θ for n = 1, 2, .... The expected value of a geometric distribution with parameter p is 1/p, so E[N | Θ = θ] = 1/θ.
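As a quick sanity check of this model (a minimal simulation sketch, not part of the derivation; the helper name tosses_until_heads is just for illustration), we can simulate the tossing process for a fixed bias θ and compare the empirical mean of N with 1/θ:

```python
import random

random.seed(0)

def tosses_until_heads(theta):
    """Toss a coin with bias theta until the first Heads;
    return N, the number of tosses including the final Heads."""
    n = 1
    while random.random() >= theta:  # Tails with probability 1 - theta
        n += 1
    return n

theta = 0.3
samples = [tosses_until_heads(theta) for _ in range(100_000)]
print(sum(samples) / len(samples))  # should be close to 1/theta ≈ 3.33
```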

Now we use Bayes' rule to get the posterior of Θ given N. The prior is flat, f_Θ(θ) = 1 on [0,1], so

f_{Θ|N}(θ | n) ∝ f_Θ(θ) p_{N|Θ}(n | θ) = θ (1-θ)^(n-1), for 0 ≤ θ ≤ 1.

The LMS estimate is the posterior mean:

E[Θ | N = n] = ∫₀¹ θ · θ(1-θ)^(n-1) dθ / ∫₀¹ θ(1-θ)^(n-1) dθ.

Using the identity ∫₀¹ θ^a (1-θ)^b dθ = a! b! / (a+b+1)!, the numerator equals 2!(n-1)!/(n+2)! and the denominator equals 1!(n-1)!/(n+1)!, so the ratio is 2/(n+2). (Equivalently, the posterior is a Beta(2, n) distribution, whose mean is 2/(2+n).)

Therefore, the LMS estimator for Θ is Θ̂LMS = 2/(N+2). Note that an estimator must be a function of the observation N alone; an expression involving the unobserved Θ itself cannot serve as an estimator.
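We can also verify 2/(N+2) numerically (again a rough sketch, not part of the formal derivation; the helper simulate_pair is ours): draw Θ uniformly, simulate N, and average the drawn Θ values among the runs that produced each value of N:

```python
import random
from collections import defaultdict

random.seed(1)

def simulate_pair():
    """Draw Theta ~ Uniform[0,1], then toss until the first Heads."""
    theta = random.random()
    n = 1
    while random.random() >= theta:
        n += 1
    return theta, n

sums = defaultdict(float)
counts = defaultdict(int)
for _ in range(500_000):
    theta, n = simulate_pair()
    sums[n] += theta
    counts[n] += 1

for n in range(1, 6):
    # empirical E[Theta | N = n] versus the closed form 2/(n+2)
    print(n, sums[n] / counts[n], 2 / (n + 2))
```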

2. To find E[Θ/Θ̂LMS], we substitute Θ̂LMS = 2/(N+2) into the expression:

E[Θ/Θ̂LMS] = E[Θ (N+2) / 2] = ( E[ΘN] + 2 E[Θ] ) / 2.

For the first term we use the law of iterated expectations (total expectation), conditioning on Θ. Since E[N | Θ] = 1/Θ,

E[ΘN] = E[ E[ΘN | Θ] ] = E[ Θ · E[N | Θ] ] = E[ Θ · (1/Θ) ] = 1.

For the second term, Θ is uniform on [0,1], so E[Θ] = 1/2. Putting the pieces together:

E[Θ/Θ̂LMS] = (1 + 2 · (1/2)) / 2 = 1.
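As a final numerical sanity check (a minimal simulation sketch, not part of the required answer), we can estimate E[Θ/Θ̂LMS] directly by simulating the whole experiment end to end:

```python
import random

random.seed(2)

trials = 500_000
total = 0.0
for _ in range(trials):
    theta = random.random()          # draw the bias: Theta ~ Uniform[0,1]
    n = 1
    while random.random() >= theta:  # toss until the first Heads
        n += 1
    lms = 2 / (n + 2)                # LMS estimate computed from the observed N
    total += theta / lms

print(total / trials)                # should be close to the exact answer, 1
```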