Consider a Markov chain X0, X1, X2, … described by the transition probability graph shown below. The chain starts at state 1; that is, X0=1.

1 → 1: p = 0.75
1 → 2: p = 0.25
2 → 1: p = 0.375
2 → 2: p = 0.25
2 → 3: p = 0.375
3 → 2: p = 0.25
3 → 3: p = 0.75
Find the probability that X2=3.

1) P(X2=3)= ?

2) Find the probability that the process is in state 3 immediately after the second change of state. (A "change of state" is a transition that is not a self-transition.) - ?

3) Find (approx.) P(X1000=2 ∣ X1000=X1001)

P(X1000=2 ∣ X1000=X1001) ≈ ?

4) Let T be the first time that the state is equal to 3.

E[T]= ?

5) Suppose for this part of the problem that the process starts instead at state 2, i.e., X0=2. Let S be the first time by which both states 1 and 3 have been visited.

E[S]= ?

3) Interesting, what does "P(X1000=2 ∣ X1000=X1001)" mean? I did not know you could condition on a future event. Normally I would have approached this using:

a) Bayes: P(A|B)=P(A)*P(B|A)/P(B)
b) the steady state for the 1->2 transition in 1000 steps.

Can you please answer??

2) 0.5

And the rest? Thanx

Please give me at least one more...

1) 3/32 is correct

3) 0.1 (0.0625 is only the numerator π2 · p22; divide by P(X1000=X1001) = 5/8 to get the conditional probability)

Anyone know 5)?

5) 8

5) 8 is false!

1) To find the probability that X2=3, note that the only two-step path from state 1 to state 3 is 1 → 2 → 3: there is no direct 1 → 3 transition, so state 3 cannot be reached from state 1 in fewer than two steps.

The transition from state 1 to state 2 has probability 0.25.
The transition from state 2 to state 3 has probability 0.375.

Multiplying the probabilities along the path:

P(X2=3) = P(1 → 2) * P(2 → 3)
= 0.25 * 0.375
= 0.09375 = 3/32

Therefore, P(X2=3) = 3/32 ≈ 0.094.
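For anyone who wants to verify this numerically, here is a quick Python sketch (not part of the original problem) that squares the transition matrix with exact fractions; the names `P`, `mat_mul`, and `P2` are my own:

```python
# Verify P(X2=3) by squaring the transition matrix with exact fractions.
# States are indexed 0, 1, 2 for states 1, 2, 3.
from fractions import Fraction as F

P = [
    [F(3, 4), F(1, 4), F(0)],      # from state 1
    [F(3, 8), F(1, 4), F(3, 8)],   # from state 2
    [F(0),    F(1, 4), F(3, 4)],   # from state 3
]

def mat_mul(A, B):
    """Multiply two 3x3 matrices of Fractions."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

P2 = mat_mul(P, P)          # two-step transition probabilities
print(P2[0][2])             # P(X2=3 | X0=1) -> 3/32
```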

2) To find the probability that the process is in state 3 immediately after the second change of state, look only at the changes of state (the "jump chain"): self-transitions don't count, so from each state we just ask where the chain goes the next time it actually moves.

Starting at state 1, the first change of state must be 1 → 2, since 1 → 2 is the only transition out of state 1 that is not a self-transition.

From state 2, the next change of state is either 2 → 1 or 2 → 3, each with unconditional probability 0.375. Conditioned on a change of state occurring, the two are equally likely:

P(state 3 after second change) = 0.375 / (0.375 + 0.375) = 0.5

Therefore, the probability that the process is in state 3 immediately after the second change of state is 0.5.
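To double-check the jump-chain argument, here is a small Python sketch (my own construction, not from the problem set; the helper name `jump_probs` is invented) that normalizes each row's off-diagonal probabilities:

```python
from fractions import Fraction as F

# Transition probabilities of the chain, keyed by state.
P = {1: {1: F(3, 4), 2: F(1, 4)},
     2: {1: F(3, 8), 2: F(1, 4), 3: F(3, 8)},
     3: {2: F(1, 4), 3: F(3, 4)}}

def jump_probs(s):
    """Distribution of the next *different* state from s (embedded chain)."""
    off = {t: p for t, p in P[s].items() if t != s}
    total = sum(off.values())
    return {t: p / total for t, p in off.items()}

# First change from state 1, then second change from wherever we landed;
# sum the probability of ending in state 3.
p = sum(q * jump_probs(t).get(3, F(0)) for t, q in jump_probs(1).items())
print(p)  # -> 1/2
```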

3) The event {X1000 = X1001} means the chain takes a self-transition at step 1000, so by the definition of conditional probability:

P(X1000=2 ∣ X1000=X1001) = P(X1000=2) p22 / [ P(X1000=1) p11 + P(X1000=2) p22 + P(X1000=3) p33 ]

Since the chain is time-homogeneous and aperiodic, after 1000 steps it is essentially in steady state, so we can replace P(X1000=i) with the steady-state probabilities πi. The balance equations give 0.25 π1 = 0.375 π2 and 0.25 π3 = 0.375 π2, i.e. π1 = π3 = 1.5 π2; combined with π1 + π2 + π3 = 1 this yields

π1 = 3/8, π2 = 1/4, π3 = 3/8.

Plugging in (p11 = p33 = 3/4 and p22 = 1/4):

P(X1000=2 ∣ X1000=X1001) ≈ (1/4)(1/4) / [ (3/8)(3/4) + (1/4)(1/4) + (3/8)(3/4) ]
= (1/16) / (5/8)
= 0.1
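A quick exact check of that arithmetic in Python (the stationary probabilities are taken from the balance equations worked out above; variable names are mine):

```python
from fractions import Fraction as F

# Stationary distribution from the balance equations:
# pi1 = pi3 = 1.5*pi2 and pi1 + pi2 + pi3 = 1.
pi = {1: F(3, 8), 2: F(1, 4), 3: F(3, 8)}
self_p = {1: F(3, 4), 2: F(1, 4), 3: F(3, 4)}   # self-transition probs

num = pi[2] * self_p[2]                   # P(X1000 = 2 and X1001 = X1000)
den = sum(pi[s] * self_p[s] for s in pi)  # P(X1001 = X1000)
print(num / den)                          # -> 1/10
```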

4) Let T be the first time that the state is equal to 3, and write ti = E[T ∣ X0 = i] for the expected time to reach state 3 starting from state i (so t3 = 0). First-step analysis gives one equation per state:

t1 = 1 + 0.75 t1 + 0.25 t2
t2 = 1 + 0.375 t1 + 0.25 t2

From the first equation, 0.25 t1 = 1 + 0.25 t2, so t1 = 4 + t2. Substituting into the second equation, 0.75 t2 = 1 + 0.375 (4 + t2), which gives 0.375 t2 = 2.5, i.e. t2 = 20/3. Therefore:

E[T] = t1 = 4 + 20/3 = 32/3

So E[T] is approximately 10.67.
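The two first-step equations can be solved exactly by substitution; here is a short Python sketch of that calculation (variable names `t1`, `t2` mirror the notation above):

```python
from fractions import Fraction as F

# First-step equations: t1 = 1 + 3/4*t1 + 1/4*t2
#                       t2 = 1 + 3/8*t1 + 1/4*t2
# The first rearranges to t1 = 4 + t2; the second to t2 = 4/3 + 1/2*t1.
# Substituting the second into the first: t1 = 4 + 4/3 + 1/2*t1.
t1 = (F(4) + F(4, 3)) / (1 - F(1, 2))
t2 = F(4, 3) + F(1, 2) * t1
print(t1)  # -> 32/3  (E[T] ~ 10.67)
```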

5) Let S be the first time by which both states 1 and 3 have been visited, starting from X0 = 2. Split S into two stages:

Stage 1: the time until the chain first leaves state 2 and hits one of the endpoint states {1, 3}. On each step the chain leaves state 2 with probability 0.75, so this time is geometric with mean 1/0.75 = 4/3.

Stage 2: having reached one endpoint, the chain must still reach the other one. The chain is symmetric (states 1 and 3 play interchangeable roles), so whichever endpoint was hit first, the expected remaining time equals the expected time to go from state 1 to state 3 computed in part 4, which is 32/3 ≈ 10.67.

Therefore:

E[S] = 4/3 + 32/3 = 36/3 = 12

So E[S] = 12 (which also explains why 8 is not the answer).
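As a sanity check, here is a Monte Carlo simulation of S in Python (my own sketch; the function names `step` and `sample_S` are invented). With a few hundred thousand runs the estimate should land close to 12:

```python
import random

# Transition table: state -> list of (probability, next_state).
TRANS = {1: [(0.75, 1), (0.25, 2)],
         2: [(0.375, 1), (0.25, 2), (0.375, 3)],
         3: [(0.25, 2), (0.75, 3)]}

def step(s, rng):
    """Sample the next state from state s."""
    r, acc = rng.random(), 0.0
    for p, t in TRANS[s]:
        acc += p
        if r < acc:
            return t
    return TRANS[s][-1][1]

def sample_S(rng):
    """Steps until both 1 and 3 have been visited, starting from X0 = 2."""
    s, visited, n = 2, set(), 0
    while not {1, 3} <= visited:
        s = step(s, rng)
        visited.add(s)
        n += 1
    return n

rng = random.Random(0)
N = 100_000
est = sum(sample_S(rng) for _ in range(N)) / N
print(round(est, 2))  # should be close to 12
```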