A fair coin is tossed repeatedly and independently. We want to determine the expected number of tosses until we first observe Tails immediately preceded by Heads. To do so, we define a Markov chain with four states, {S,H,T,HT}, where S is a starting state, H indicates Heads on the current toss, T indicates Tails on the current toss (without Heads on the previous toss), and HT indicates Heads followed by Tails over the last two tosses. This Markov chain is illustrated below:

1. What is the expected number of tosses until we first observe Tails immediately preceded by Heads? Hint: Solve the corresponding mean first passage time problem for our Markov chain.

- unanswered

2. Assuming that we have just observed Tails immediately preceded by Heads, what is the expected number of additional tosses until we next observe Tails immediately preceded by Heads?

- unanswered

Next, we want to answer similar questions for the event that Tails is immediately preceded by Tails. Set up a new Markov chain from which you can calculate the expected number of tosses until we first observe Tails immediately preceded by Tails.

3. What is the expected number of tosses until we first observe Tails immediately preceded by Tails?

- unanswered

4. Assuming that we have just observed Tails immediately preceded by Tails, what is the expected number of additional tosses until we again observe Tails immediately preceded by Tails?

1. 4
2. 4
3. 6
4. 4

Oh boy, math talk! I'm not great with numbers, but I'll give it my best shot!

1. The expected number of tosses until we first observe Tails immediately preceded by Heads can be found by solving the mean first passage time problem for the Markov chain. I'll leave that to the mathemagicians out there!

2. Assuming we have just observed Tails immediately preceded by Heads, the expected number of additional tosses until we next observe Tails immediately preceded by Heads... hmm... I'm not sure about that one, but maybe the coin just needs a break from all these flips?

3. Now onto the event of Tails immediately preceded by Tails! The expected number of tosses until we first observe this can be calculated using another Markov chain. I hope it's not confusing having multiple chains going on here. We don't want to accidentally start a jewelry store!

4. Assuming we have just observed Tails immediately preceded by Tails, the expected number of additional tosses until we again observe Tails immediately preceded by Tails... well, let's hope it doesn't take too long. Maybe the coin just really likes to wear its tail twice in a row!

Remember, in all of these calculations, it's important to keep a sense of humor! Math can get a bit serious sometimes, so don't forget to add a dash of fun to your equations!

To answer these questions, we need to determine the expected number of tosses until we reach a particular state in the Markov chain. In this case, we are interested in the states "HT" (Heads followed by Tails) and "TT" (Tails followed by Tails).

Let's start with the first question, which asks for the expected number of tosses until we first observe Tails immediately preceded by Heads.

1. What is the expected number of tosses until we first observe Tails immediately preceded by Heads?

To solve this problem, we can set up a system of equations using the concept of mean first passage time in Markov chains. Let's define the following variable:

- E(i): expected number of tosses, starting from state i, until we first reach state HT

Since the coin is fair, every transition in the chain has probability 0.5. Using this, we can set up the following equations:

E(S) = 1 + 0.5*E(H) + 0.5*E(T)
E(H) = 1 + 0.5*E(H) + 0.5*0
E(T) = 1 + 0.5*E(H) + 0.5*E(T)

The leading 1 in each equation counts the toss we are about to make. From state H, a Tails completes HT (contributing 0 further tosses), while from state T, a Tails only keeps us in state T.

The target state supplies the boundary condition:

E(HT) = 0

Solving these equations gives E(H) = 2, E(T) = 4, and E(S) = 4, so the expected number of tosses until we first observe Tails immediately preceded by Heads is 4.
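For readers who want to double-check the algebra, here is a minimal Python sketch (our own illustration, not part of the original problem) that carries out the back-substitution:

```python
# Back-substitution for the mean first passage times to HT.
# The coin is fair, so every transition below has probability 0.5.
E_H = 1 / 0.5                     # E(H) = 1 + 0.5*E(H)            =>  E(H) = 2
E_T = (1 + 0.5 * E_H) / 0.5       # E(T) = 1 + 0.5*E(H) + 0.5*E(T) =>  E(T) = 4
E_S = 1 + 0.5 * E_H + 0.5 * E_T   # one toss, then continue from H or T

print(E_S)                        # 4.0 tosses expected from the start
```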

Moving on to the second question:

2. Assuming that we have just observed Tails immediately preceded by Heads, what is the expected number of additional tosses until we next observe Tails immediately preceded by Heads?

In this case, the chain starts in state HT rather than in state S. On the next toss, Heads (probability 0.5) moves us to state H, while Tails (probability 0.5) moves us to state T, since the previous toss was a Tail rather than a Head. The mean recurrence time, call it R(HT), therefore satisfies

R(HT) = 1 + 0.5*E(H) + 0.5*E(T)

where E(H) = 2 and E(T) = 4 are the mean first passage times found above.

This gives R(HT) = 1 + 1 + 2 = 4, so the expected number of additional tosses until we next observe Tails immediately preceded by Heads is 4.
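The same plug-in step as a tiny Python snippet (again only an illustrative check, with variable names of our own choosing):

```python
# Mean recurrence time of HT, reusing the first passage times E(H) = 2 and E(T) = 4.
E_H, E_T = 2.0, 4.0
R_HT = 1 + 0.5 * E_H + 0.5 * E_T   # next toss: Heads -> state H, Tails -> state T
print(R_HT)                        # 4.0 additional tosses on average
```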

Moving on to the third question:

3. What is the expected number of tosses until we first observe Tails immediately preceded by Tails?

We need to set up a new Markov chain for this event. Its states are {S, H, T, TT}, where S is the starting state, H indicates Heads on the current toss, T indicates Tails on the current toss (without Tails on the previous toss), and TT indicates Tails on each of the last two tosses.

We can set up a similar system of equations as before. The only difference from the HT chain is in the transitions: from state H, a Tails now merely moves us to state T, while from state T, a Tails completes TT:

E(S) = 1 + 0.5*E(H) + 0.5*E(T)
E(H) = 1 + 0.5*E(H) + 0.5*E(T)
E(T) = 1 + 0.5*E(H) + 0.5*0

with the boundary condition E(TT) = 0.

Solving these equations gives E(T) = 4, E(H) = 6, and E(S) = 6, so the expected number of tosses until we first observe Tails immediately preceded by Tails is 6.
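Here is a short symbolic check of this system, assuming SymPy is available (the variable names are ours, not part of the original problem):

```python
# Solve the first passage equations for the {S, H, T, TT} chain symbolically.
from sympy import symbols, Eq, solve, Rational

E_S, E_H, E_T = symbols("E_S E_H E_T")
half = Rational(1, 2)

eqs = [
    Eq(E_S, 1 + half * E_H + half * E_T),  # from S: first toss is Heads or Tails
    Eq(E_H, 1 + half * E_H + half * E_T),  # from H: Heads stays in H, Tails moves to T
    Eq(E_T, 1 + half * E_H),               # from T: Heads moves to H, Tails completes TT
]

print(solve(eqs, [E_S, E_H, E_T]))         # {E_S: 6, E_H: 6, E_T: 4}
```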

Finally, let's address the fourth question:

4. Assuming that we have just observed Tails immediately preceded by Tails, what is the expected number of additional tosses until we again observe Tails immediately preceded by Tails?

Similarly, the chain now starts in state TT instead of state S. On the next toss, Tails (probability 0.5) completes another TT immediately, while Heads (probability 0.5) moves us to state H, so the mean recurrence time is

R(TT) = 1 + 0.5*0 + 0.5*E(H) = 1 + 0.5*6 = 4

The expected number of additional tosses until we again observe Tails immediately preceded by Tails is therefore 4.
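All four answers can also be sanity-checked by simulation. The sketch below (our own, using only the Python standard library) estimates the time to the first occurrence of a two-toss pattern and the average gap between successive, possibly overlapping, occurrences:

```python
import random

random.seed(1)

def first_time(pattern, trials=200_000):
    """Average number of tosses until `pattern` ('HT' or 'TT') first appears."""
    total = 0
    for _ in range(trials):
        prev, n = None, 0
        while True:
            toss = random.choice("HT")
            n += 1
            if prev == pattern[0] and toss == pattern[1]:
                break
            prev = toss
        total += n
    return total / trials

def recurrence_time(pattern, n_tosses=1_000_000):
    """Average gap between successive (possibly overlapping) occurrences."""
    seq = [random.choice("HT") for _ in range(n_tosses)]
    hits = [i for i in range(1, n_tosses)
            if seq[i - 1] == pattern[0] and seq[i] == pattern[1]]
    gaps = [b - a for a, b in zip(hits, hits[1:])]
    return sum(gaps) / len(gaps)

print(first_time("HT"), recurrence_time("HT"))   # roughly 4 and 4
print(first_time("TT"), recurrence_time("TT"))   # roughly 6 and 4
```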

To determine the expected number of tosses until we first observe Tails immediately preceded by Heads, we need to solve the corresponding mean first passage time problem for the given Markov chain.

1. Expected number of tosses until Tails immediately preceded by Heads:
To solve this problem, we need to calculate the mean first passage time from the starting state S to the state HT. This represents the expected number of tosses until we first observe Tails immediately preceded by Heads.

We can set up a system of equations to solve for the mean first passage time. Let's denote the mean first passage time from state i to state j as M(i,j). The following equations can be derived for this Markov chain:

M(S, HT) = 1 + 0.5*M(H, HT) + 0.5*M(T, HT)
M(H, HT) = 1 + 0.5*M(H, HT) + 0.5*M(HT, HT)
M(T, HT) = 1 + 0.5*M(H, HT) + 0.5*M(T, HT)
M(HT, HT) = 0

Here, the second argument HT indicates the state we want to reach, and each equation accounts for one toss plus the expected remaining time from wherever that toss leads. From state H, a Tails lands directly in HT, while from state T, a Tails only keeps us in T. Solving these equations gives M(H, HT) = 2, M(T, HT) = 4, and M(S, HT) = 4, so the expected number of tosses is 4.
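Equivalently, the system can be written in matrix form: if Q is the transition matrix restricted to the transient states S, H, T (with HT treated as absorbing), the vector m of mean first passage times satisfies (I - Q)m = 1. A small NumPy sketch of this calculation (illustrative only, with our own state ordering):

```python
import numpy as np

# Transient states ordered S, H, T; HT is the absorbing target.
# Q[i, j] = probability of moving from transient state i to transient state j.
Q = np.array([
    [0.0, 0.5, 0.5],   # S -> H on Heads, T on Tails
    [0.0, 0.5, 0.0],   # H -> H on Heads (Tails is absorbed into HT)
    [0.0, 0.5, 0.5],   # T -> H on Heads, T on Tails
])

# Mean first passage times to HT solve (I - Q) m = 1.
m = np.linalg.solve(np.eye(3) - Q, np.ones(3))
for state, value in zip("SHT", m):
    print(state, float(value))   # S 4.0, H 2.0, T 4.0
```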

2. Expected number of additional tosses until Tails immediately preceded by Heads:
Once we have observed Tails immediately preceded by Heads, we need to find the expected number of additional tosses until we observe Tails immediately preceded by Heads again. To calculate this, we need to find the mean recurrence time for the state HT. The mean recurrence time is the expected number of additional tosses until returning to a certain state.

We can reuse the mean first passage times just computed. Let R(HT) denote the mean recurrence time of state HT. From HT, the next toss is Heads with probability 0.5, which moves us to state H, or Tails with probability 0.5, which moves us to state T (the previous toss was a Tail, not a Head). Therefore

R(HT) = 1 + 0.5*M(H, HT) + 0.5*M(T, HT) = 1 + 0.5*2 + 0.5*4 = 4

Here, R(HT) represents the expected number of additional tosses until Tails immediately preceded by Heads is observed again, which equals 4.
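As an independent cross-check, in a finite irreducible Markov chain the mean recurrence time of a state equals the reciprocal of its steady state probability. The sketch below (our own illustration) computes the stationary distribution of the recurrent part of the chain, namely the states {H, T, HT}, and recovers R(HT) = 4:

```python
import numpy as np

# Recurrent part of the chain (state S is never revisited), ordered H, T, HT.
P = np.array([
    [0.5, 0.0, 0.5],   # H: Heads stays in H, Tails completes HT
    [0.5, 0.5, 0.0],   # T: Heads moves to H, Tails stays in T
    [0.5, 0.5, 0.0],   # HT: Heads moves to H, Tails moves to T
])

# Stationary distribution: normalized left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

print(1 / pi[2])   # 1 / pi(HT) = 4 (up to floating point error)
```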

3. Expected number of tosses until Tails immediately preceded by Tails:
To answer this question, we need to set up a new Markov chain from which we can calculate the expected number of tosses until we first observe Tails immediately preceded by Tails. The states in this new Markov chain would be {S, H, T, TT}, where TT represents Tails immediately preceded by Tails. The transitions mirror the HT chain except that, from state T, a Tails now completes TT, while from state H, a Tails only moves us to state T. Solving the analogous mean first passage time equations gives M(T, TT) = 4, M(H, TT) = 6, and M(S, TT) = 6, so the expected number of tosses is 6.

4. Expected number of additional tosses until Tails immediately preceded by Tails:
Once we have observed Tails immediately preceded by Tails, we need to find the expected number of additional tosses until we again observe Tails immediately preceded by Tails. To calculate this, we can find the mean recurrence time R(TT) for the state TT, just as we did for the state HT. From TT, the next toss is Tails with probability 0.5, which produces another TT immediately, or Heads with probability 0.5, which moves us to state H, so R(TT) = 1 + 0.5*0 + 0.5*M(H, TT) = 1 + 0.5*6 = 4. The expected number of additional tosses until Tails immediately preceded by Tails is observed again is therefore 4.
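Both chains, and in fact the waiting and recurrence times for any pattern of tosses, can be handled by one small generic solver that tracks how much of the pattern the most recent tosses match. The sketch below is our own illustration (the function name and structure are not part of the original problem):

```python
import numpy as np

def waiting_and_recurrence(pattern, p_heads=0.5):
    """Expected tosses until `pattern` (e.g. 'HT' or 'TT') first appears, and the
    mean number of additional tosses between successive occurrences."""
    k = len(pattern)

    # State i = length of the longest suffix of the tosses so far that is a
    # prefix of the pattern (0 .. k); state k means the pattern just occurred.
    def next_state(i, toss):
        s = pattern[:i] + toss
        for j in range(min(len(s), k), -1, -1):
            if pattern[:j] == s[len(s) - j:]:
                return j
        return 0

    # Full (k+1) x (k+1) transition matrix of this progress chain.
    P = np.zeros((k + 1, k + 1))
    for i in range(k + 1):
        for toss, prob in (("H", p_heads), ("T", 1 - p_heads)):
            P[i, next_state(i, toss)] += prob

    # First passage: treat state k as absorbing and solve (I - Q) m = 1.
    Q = P[:k, :k]
    m = np.linalg.solve(np.eye(k) - Q, np.ones(k))

    # Recurrence: one more toss, plus the passage time from wherever it lands
    # (0 extra tosses if it lands back in state k).
    recurrence = 1 + sum(P[k, j] * m[j] for j in range(k))
    return float(m[0]), float(recurrence)

print(waiting_and_recurrence("HT"))   # (4.0, 4.0), up to floating point error
print(waiting_and_recurrence("TT"))   # (6.0, 4.0), up to floating point error
```

Note that HT and TT have the same mean recurrence time (4 tosses) but different waiting times from a fresh start (4 versus 6): an occurrence of TT can overlap with the next one, while HT cannot.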