Consider a Markov chain X0,X1,X2,… described by the transition probability graph shown below. The chain starts at state 1; that is, X0=1.

Find the probability that X2=3.

P(X2=3)= - unanswered

Find the probability that the process is in state 3 immediately after the second change of state. (A "change of state" is a transition that is not a self-transition.)

- unanswered

Find (approximately) P(X1000=2∣X1000=X1001).

P(X1000=2∣X1000=X1001)≈ - unanswered
Let T be the first time that the state is equal to 3.

E[T]= - unanswered

Suppose for this part of the problem that the process starts instead at state 2, i.e., X0=2. Let S be the first time by which both states 1 and 3 have been visited.

E[S]= - unanswered

I need the solution to the above. If anybody knows it, please give the answers...

You did not provide the transition probability graph/matrix when you first posted the problem.

I posted the matrix in my post. I also posted the answers I got wrong when I tried.

I am not sure, and I cannot check it until I am sure, but for (1) I have 3/32, i.e., 1/4 * 3/8.

For (2) I have 1/2, but I'm really not sure.
(3) could be 1/3.
(4) should be 20/3, which is the sum of the reciprocals 1/(1/4) + 1/(3/8).
(5) ???

Can you please answer? Thanks.

I only have one answer left.

@gugget

Only (2) is correct, which is 1/2.

1) 3/32 correct
2) 0.5 correct
3) 1/3 incorrect
4) 20/3 incorrect
5)

Thanks

To find the probability that X2=3, we need the probability of going from state 1 to state 3 in exactly two steps. The transition graph is not reproduced in this thread, so the numbers have to be read off the graph; the feedback above confirms the answer is 3/32 = (1/4)*(3/8).

1. Start at state 1 (X0=1).
2. List every two-step path 1 → k → 3, for k = 1, 2, 3.
3. The probability of each path is the product of its two one-step transition probabilities, P(1→k) * P(k→3).
4. Sum over all such paths:
   - P(X2=3) = P(1→1)P(1→3) + P(1→2)P(2→3) + P(1→3)P(3→3).
   - With the values implied by the thread (P(1→2) = 1/4, P(2→3) = 3/8, and no direct arc from 1 to 3), only the path 1→2→3 contributes, giving P(X2=3) = 1/4 * 3/8 = 3/32.
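Here is a minimal numpy sketch of this two-step computation. The matrix P below is a placeholder: only P(1→2) = 1/4 and P(2→3) = 3/8 are suggested by the thread's confirmed answer, and every other entry is an assumption that must be replaced with the values from the actual graph.

```python
import numpy as np

# Placeholder transition matrix for states 1, 2, 3 (each row must sum to 1).
# Only P[0, 1] = 1/4 and P[1, 2] = 3/8 are suggested by the thread's answer
# 3/32 = 1/4 * 3/8; every other entry is an assumption for illustration.
P = np.array([
    [3/4, 1/4, 0.0],   # from state 1 (assumed)
    [3/8, 1/4, 3/8],   # from state 2 (assumed)
    [0.0, 1/4, 3/4],   # from state 3 (pure placeholder)
])

pi0 = np.array([1.0, 0.0, 0.0])   # X0 = 1 with probability 1
pi2 = pi0 @ P @ P                 # distribution of X2
print("P(X2 = 3) =", pi2[2])      # 3/32 = 0.09375 with the placeholder matrix
```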

To find the probability that the process is in state 3 immediately after the second change of state, look only at the transitions that actually change the state: drop the self-transitions and renormalize what is left out of each state (the "jump chain" of the Markov chain).

1. Start at state 1 (X0=1).
2. Given that a change of state occurs from state i, the next state is j ≠ i with conditional probability P(i→j) / (1 - P(i→i)).
3. Apply this once to get the distribution of the state after the first change, and once more for the second change.
4. Add up the probabilities of the two-jump paths that end in state 3.

The feedback above confirms the answer is 1/2, which is what this calculation gives when the only change of state out of 1 leads to 2 and, from 2, a change of state is equally likely to lead to 1 or to 3.
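A sketch of the same jump-chain calculation, reusing the placeholder matrix from above (again an assumption, not the actual graph):

```python
import numpy as np

# Same placeholder transition matrix as above; replace with the real graph.
P = np.array([
    [3/4, 1/4, 0.0],
    [3/8, 1/4, 3/8],
    [0.0, 1/4, 3/4],
])

# Jump chain: drop self-transitions and renormalize each row, so that
# J[i, j] = P(next state is j | a change of state occurs from state i).
J = P - np.diag(np.diag(P))
J = J / J.sum(axis=1, keepdims=True)

start = np.array([1.0, 0.0, 0.0])   # X0 = 1
after_two_jumps = start @ J @ J     # distribution right after the 2nd change of state
print("P(state 3 after 2nd change of state) =", after_two_jumps[2])
```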

To find (approximately) P(X1000=2∣X1000=X1001), use the fact that after 1000 steps the chain has essentially reached its steady state (assuming, as the "approximately" suggests, a single aperiodic recurrent class).

1. Solve the balance equations π = πP together with π1 + π2 + π3 = 1 for the steady-state probabilities πi, so that P(X1000 = i) ≈ πi.
2. The event X1000 = X1001 means that the transition at time 1000 is a self-transition; its probability is approximately π1 P(1→1) + π2 P(2→2) + π3 P(3→3).
3. By the definition of conditional probability,
   P(X1000=2∣X1000=X1001) = P(X1000=2 and X1000=X1001) / P(X1000=X1001) ≈ π2 P(2→2) / (π1 P(1→1) + π2 P(2→2) + π3 P(3→3)).
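A sketch of this steady-state calculation with the same placeholder matrix (the printed number is only meaningful once the real transition probabilities are filled in):

```python
import numpy as np

# Same placeholder transition matrix as above; replace with the real graph.
P = np.array([
    [3/4, 1/4, 0.0],
    [3/8, 1/4, 3/8],
    [0.0, 1/4, 3/4],
])

# Steady-state distribution: solve pi P = pi together with sum(pi) = 1
# (least squares handles the redundant equation in pi P = pi).
A = np.vstack([P.T - np.eye(3), np.ones((1, 3))])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

self_loops = np.diag(P)   # P(i -> i) for each state i
print("P(X1000 = 2 | X1000 = X1001) ≈", pi[1] * self_loops[1] / (pi @ self_loops))
```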

To find E[T], where T is the first time the state equals 3 and the chain starts at state 1, set up first-step equations for the mean first-passage times.

1. Let ti denote the expected number of steps to reach state 3 starting from state i, with t3 = 0.
2. First-step analysis: for i ≠ 3, ti = 1 + sum over j of P(i→j) * tj.
3. This is a small linear system in t1 and t2; solve it.
4. The answer is E[T] = t1. (Simply adding reciprocals, 1/(1/4) + 1/(3/8) = 20/3 as attempted above, was marked incorrect; that shortcut is only valid when the chain cannot drift back toward state 1 from state 2 before reaching state 3.)
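A sketch of the corresponding linear system, again with the placeholder matrix:

```python
import numpy as np

# Same placeholder transition matrix as above; replace with the real graph.
P = np.array([
    [3/4, 1/4, 0.0],
    [3/8, 1/4, 3/8],
    [0.0, 1/4, 3/4],
])

# Mean first-passage times to state 3: t3 = 0 and, for i in {1, 2},
#   t_i = 1 + sum_j P(i -> j) * t_j,
# which rearranges to (I - Q) t = 1 with Q = P restricted to states {1, 2}.
Q = P[:2, :2]
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print("E[T] starting from state 1 =", t[0])
```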

Finally, to find E[S], the expected time until both states 1 and 3 have been visited when the chain starts at state 2 (X0=2), condition on which of the two target states is reached first.

1. Start at state 2 (X0=2).
2. The chain must first reach one of the states {1, 3}; from that state it must then go on to reach the other one. By the Markov property, the remaining time is just a first-passage time from wherever the chain is at that moment.
3. Therefore
   E[S] = E[time to reach {1,3} from 2] + P(reach 1 before 3) * E[time to reach 3 from 1] + P(reach 3 before 1) * E[time to reach 1 from 3].
4. Each piece (a mean hitting time or a "which state is hit first" probability) comes from its own first-step equations, exactly as in the E[T] calculation above.
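Rather than writing out all of those equations, E[S] can also be estimated by simulation; here is a sketch, once more using the placeholder matrix (states are 0-indexed in the code, so index 1 is state 2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Same placeholder transition matrix as above; replace with the real graph.
P = np.array([
    [3/4, 1/4, 0.0],
    [3/8, 1/4, 3/8],
    [0.0, 1/4, 3/4],
])

def sample_S(start):
    """One realization of S: the first time both state 1 (index 0) and
    state 3 (index 2) have been visited, starting from `start`."""
    state, t, visited = start, 0, {start}
    while not {0, 2} <= visited:
        state = rng.choice(3, p=P[state])
        t += 1
        visited.add(int(state))
    return t

samples = [sample_S(start=1) for _ in range(100_000)]   # start = state 2 (index 1)
print("E[S] ≈", np.mean(samples))
```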