We have k coins. The probability of Heads is the same for each coin and is the realized value q of a random variable Q that is uniformly distributed on [0,1]. We assume that conditioned on Q=q, all coin tosses are independent. Let Ti be the number of tosses of the ith coin until that coin results in Heads for the first time, for i=1,2,…,k. (Ti includes the toss that results in the first Heads.)

You may find the following integral useful: For any non-negative integers k and m,

∫_0^1 q^k(1−q)^m dq = k!m!/(k+m+1)!

1. Find the PMF of T1. (Express your answer in terms of t using standard notation.)
For t=1,2,…,
pT1(t)=
2. Find the least mean squares (LMS) estimate of Q based on the observed value, t, of T1. (Express your answer in terms of t using standard notation.)
E[Q∣T1=t]=

1. The PMF of T1 can be found by first conditioning on Q = q. Given Q = q, the tosses are independent with Heads probability q, so the event T1 = t requires t-1 Tails followed by a Heads, which has probability (1 - q)^(t-1) * q. This is only the conditional PMF p(T1=t | Q=q); the answer must be in terms of t alone, so we average over the uniform prior on Q, using the provided integral with k = 1 and m = t-1.

Therefore, the PMF of T1 is given by:
pT1(t) = ∫_0^1 q * (1 - q)^(t-1) dq = 1!(t-1)!/(t+1)! = 1/(t(t+1)), for t = 1, 2, …
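As a quick numerical sanity check (a sketch, not part of the solution above; it assumes Python with only the standard library), we can simulate the two-stage experiment and compare the empirical frequencies of T1 against 1/(t(t+1)):

import random

def sample_T1():
    q = 1.0 - random.random()    # Q ~ Uniform on (0, 1]; avoids q == 0 exactly
    t = 1
    while random.random() >= q:  # each toss is Heads with probability q
        t += 1
    return t                     # tosses up to and including the first Heads

n = 200_000
samples = [sample_T1() for _ in range(n)]
for t in range(1, 6):
    print(t, sum(1 for s in samples if s == t) / n, 1 / (t * (t + 1)))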

2. The least mean squares (LMS) estimate of Q based on the observed value t of T1 can be calculated as the conditional expectation E[Q | T1 = t].

Using the definition of conditional expectation, where f(q | T1=t) denotes the conditional PDF of Q given T1 = t (Q is continuous, so we work with densities rather than probabilities):
E[Q | T1 = t] = ∫_0^1 q * f(q | T1=t) dq

We can use Bayes' rule to find f(q | T1=t):
f(q | T1=t) = p(T1=t | Q=q) * f(q) / p(T1=t)

Given that the probability of Heads is the realized value q of a random variable Q that is uniformly distributed on [0,1], we have:
p(T1=t | Q=q) = (1 - q)^(t-1) * q (the conditional PMF derived above)
f(q) = 1 for 0 ≤ q ≤ 1 (since Q is uniformly distributed on [0,1])
p(T1=t) = ∫_0^1 (1 - q)^(t-1) * q dq (integrating over all possible values of q, by the total probability theorem)

Using the provided integral with k = 1 and m = t-1, we can calculate p(T1=t):
p(T1=t) = ∫_0^1 q * (1 - q)^(t-1) dq
= 1! * (t-1)! / (t+1)!
= (t-1)! / [(t+1) * t * (t-1)!]
= 1/(t(t+1))

Substituting these results back into the expression for E[Q | T1=t], we have:
E[Q | T1=t] = ∫_0^1 q * [q * (1 - q)^(t-1) / p(T1=t)] dq
= t(t+1) * ∫_0^1 q^2 * (1 - q)^(t-1) dq

This integral does have a closed form: it is the provided integral with k = 2 and m = t-1, namely 2!(t-1)!/(t+2)! = 2(t-1)!/(t+2)!. Since (t+2)! = (t+2)(t+1) * t * (t-1)!, the factorials cancel, and the LMS estimate of Q based on the observed value t of T1 is:
E[Q | T1=t] = t(t+1) * 2(t-1)!/(t+2)! = 2/(t+2)
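The whole calculation can also be verified symbolically; a minimal sketch, assuming the sympy library is available (this check is not part of the solution itself):

import sympy as sp

q = sp.symbols('q')
for t in range(1, 6):
    p_t = sp.integrate(q * (1 - q)**(t - 1), (q, 0, 1))     # p(T1 = t)
    num = sp.integrate(q**2 * (1 - q)**(t - 1), (q, 0, 1))  # numerator of E[Q | T1 = t]
    assert p_t == sp.Rational(1, t * (t + 1))
    assert num / p_t == sp.Rational(2, t + 2)
print('all checks passed')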

1. To find the PMF of T1, we can calculate the probability that the first Heads of the first coin occurs on the tth toss. Let's denote this probability as pT1(t).

Conditioned on Q = q, the first Heads occurs on the tth toss exactly when all previous t-1 tosses result in Tails, which occurs with probability (1-q)^(t-1), and the tth toss results in Heads, which occurs with probability q. Therefore:

p(T1=t | Q=q) = (1-q)^(t-1) * q

The PMF of T1 itself must be expressed in terms of t alone, so we integrate this against the uniform density of Q, using the provided integral with k = 1 and m = t-1:

pT1(t) = ∫_0^1 q * (1-q)^(t-1) dq = 1!(t-1)!/(t+1)! = 1/(t(t+1))

where t = 1, 2, ...
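As an aside (not part of the solution), this PMF sums to 1 because 1/(t(t+1)) = 1/t - 1/(t+1) telescopes; the partial sum up to N is exactly 1 - 1/(N+1):

for N in (10, 100, 1000):
    print(N, sum(1 / (t * (t + 1)) for t in range(1, N + 1)), 1 - 1 / (N + 1))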

2. To find the LMS estimate of Q based on the observed value t of T1, we can calculate the conditional expectation of Q given T1 = t. Let's denote this as E[Q|T1=t].

Since Q is uniformly distributed on [0,1], its prior expected value E[Q] is equal to 1/2. However, conditioned on T1 = t, the distribution of Q is no longer uniform on [0,1]: by Bayes' rule, the posterior density is proportional to the likelihood of the observation. Note that the support of Q remains the full interval [0,1]; the observation reweights the density, it does not truncate it.

Therefore, the LMS estimate of Q given T1 = t is the posterior mean:

E[Q|T1=t] = ∫_0^1 q * f(q|T1=t) dq

where f(q|T1=t) is the conditional PDF of Q given T1 = t.

Since Q is uniformly distributed on [0,1], its prior PDF is f(q) = 1 for 0 ≤ q ≤ 1, and Bayes' rule gives:

f(q|T1=t) = p(T1=t | Q=q) * f(q) / pT1(t) = q * (1-q)^(t-1) / [1/(t(t+1))] = t(t+1) * q * (1-q)^(t-1)

Substituting this into the expression for E[Q|T1=t] and using the provided integral with k = 2 and m = t-1, we get:

E[Q|T1=t] = t(t+1) * ∫_0^1 q^2 * (1-q)^(t-1) dq

= t(t+1) * 2!(t-1)!/(t+2)!

= 2/(t+2)

since (t+2)! = (t+2)(t+1) * t * (t-1)!.
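A minimal numerical sketch of this result, assuming the numpy library (not part of the solution itself): the posterior density t(t+1) * q * (1-q)^(t-1) should integrate to 1 and have mean 2/(t+2).

import numpy as np

m = 200_000
qs = np.linspace(0.0, 1.0, m, endpoint=False) + 0.5 / m  # midpoints of m subintervals
dq = 1.0 / m
for t in (1, 2, 5, 10):
    post = t * (t + 1) * qs * (1.0 - qs)**(t - 1)  # posterior density of Q given T1 = t
    print(t, post.sum() * dq, (qs * post).sum() * dq, 2 / (t + 2))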

1. To find the PMF of T1, we need to calculate the probability that it takes exactly t tosses until the first Heads for the first coin.

Let's break down the events that need to occur:
- The first t-1 tosses of the first coin must result in Tails.
- The tth toss of the first coin must result in Heads.

Conditioned on Q = q, each coin toss is independent and has a probability of q for Heads, so the probability of an individual toss resulting in Tails is (1 - q). Therefore, the probability of the first t-1 tosses resulting in Tails is (1 - q)^(t-1).

The tth toss must result in Heads, which has a probability of q.

Therefore, the conditional PMF of T1 given Q = q is:
p(T1=t | Q=q) = (1 - q)^(t-1) * q

Since the answer must depend on t only, we average over the uniform distribution of Q, using the provided integral with k = 1 and m = t-1:
pT1(t) = ∫_0^1 q * (1 - q)^(t-1) dq = 1!(t-1)!/(t+1)! = 1/(t(t+1)), for t = 1, 2, …
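The factorial simplification used here, 1!(t-1)!/(t+1)! = 1/(t(t+1)), can be checked exactly with standard-library Python (a quick aside, not part of the solution):

from fractions import Fraction
from math import factorial

for t in range(1, 8):
    assert Fraction(factorial(1) * factorial(t - 1), factorial(t + 1)) == Fraction(1, t * (t + 1))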

2. The least mean squares (LMS) estimate of Q, denoted as E[Q|T1=t], is the conditional expectation of Q given that T1 is equal to t.

To find this estimate, we need to calculate the conditional expectation. We can use the formula for conditional expectation, with f(q|T1=t) denoting the conditional PDF of Q given T1 = t:

E[Q|T1=t] = ∫_0^1 q * f(q|T1=t) dq

Since Q is uniformly distributed on [0,1], its prior density is 1 there, and Bayes' rule gives the conditional density as the likelihood divided by the normalizing constant pT1(t):

f(q|T1=t) = p(T1=t | Q=q) * 1 / pT1(t) = q * (1 - q)^(t-1) / pT1(t)

Thus, the LMS estimate of Q based on the observed value t of T1 is:
E[Q|T1=t] = [∫_0^1 q * q * (1 - q)^(t-1) dq] / pT1(t)

Now, you can evaluate the numerator using the given useful integral:
∫_0^1 q^k(1−q)^m dq = k!m!/(k+m+1)!

In this case, k = 2 and m = t-1. So, plugging those values into the integral formula, we get:
∫_0^1 q^2 (1-q)^(t-1) dq = 2!(t-1)!/(t+2)!

Dividing by pT1(t) = 1/(t(t+1)) from part 1, and using (t+2)! = (t+2)(t+1) * t * (t-1)!, the least mean squares (LMS) estimate of Q based on the observed value t of T1 is:
E[Q|T1=t] = 2(t-1)!/(t+2)! * t(t+1) = 2/(t+2)
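To close the loop, a simulation sketch using only the Python standard library (an illustration, not part of the solution): average the sampled q over the runs in which T1 came out equal to t, and compare with 2/(t+2).

import random
from collections import defaultdict

sums = defaultdict(float)   # running sum of q for each observed value of T1
counts = defaultdict(int)   # number of runs with each observed value of T1
for _ in range(300_000):
    q = 1.0 - random.random()    # Q ~ Uniform on (0, 1]
    t = 1
    while random.random() >= q:  # toss until the first Heads
        t += 1
    sums[t] += q
    counts[t] += 1

for t in (1, 2, 3, 5):
    print(t, sums[t] / counts[t], 2 / (t + 2))  # empirical E[Q | T1 = t] vs 2/(t+2)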