define steady state in Markov chains mathematically

In the context of Markov chains, the steady state (also referred to as the stationary distribution or equilibrium distribution) is a probability distribution over the states that remains unchanged as time progresses. Mathematically, it is a probability vector that satisfies two conditions with respect to the transition matrix of the Markov chain.

Suppose we have a Markov chain with \( n \) states, and let \( P \) be its transition matrix, where \( P_{ij} \) denotes the probability of transitioning from state \( i \) to state \( j \). A row vector \( \pi = (\pi_1, \pi_2, \ldots, \pi_n) \) is said to be a steady state distribution if it satisfies the following two conditions:

1. **Equilibrium Condition:**
\[
\pi P = \pi
\]
This means that when you apply the transition matrix \( P \) to the steady state distribution \( \pi \), you get \( \pi \) itself. That is, the distribution does not change when transitions occur according to the probabilities defined by \( P \).

2. **Probability Distribution Condition:**
\[
\sum_{i=1}^{n} \pi_i = 1 \quad \text{and} \quad \pi_i \geq 0 \quad \text{for all } i
\]
This ensures that \( \pi \) is indeed a probability distribution: the probabilities are all non-negative and sum up to 1.
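As a concrete check of both conditions, here is a small NumPy sketch using a hypothetical 2-state transition matrix (the matrix and the candidate \( \pi \) are illustrative, not from the text above):

```python
import numpy as np

# Hypothetical 2-state transition matrix; each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Candidate steady state for this particular P: pi = (5/6, 1/6).
pi = np.array([5/6, 1/6])

# Equilibrium condition: pi P = pi
print(np.allclose(pi @ P, pi))                      # True

# Probability distribution condition: entries non-negative, sum to 1
print(np.isclose(pi.sum(), 1.0) and np.all(pi >= 0))  # True
```

Verifying \( \pi P = \pi \) by hand for this matrix: \( \tfrac{5}{6}\cdot 0.9 + \tfrac{1}{6}\cdot 0.5 = \tfrac{5}{6} \) and \( \tfrac{5}{6}\cdot 0.1 + \tfrac{1}{6}\cdot 0.5 = \tfrac{1}{6} \).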

In matrix form, treating \( \pi \) as a \( 1 \times n \) row vector, the equilibrium condition can also be written as:
\[
\pi \begin{bmatrix}
P_{11} & P_{12} & \cdots & P_{1n} \\
P_{21} & P_{22} & \cdots & P_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
P_{n1} & P_{n2} & \cdots & P_{nn}
\end{bmatrix} =
\begin{bmatrix}
\pi_1 & \pi_2 & \cdots & \pi_n
\end{bmatrix}
\]

Finding a steady state distribution amounts to solving this system of linear equations for \( \pi \), i.e. finding a left eigenvector of \( P \) associated with the eigenvalue 1 and normalising it so its entries sum to 1. Every finite Markov chain has at least one stationary distribution, but it need not be unique (a reducible chain can have several); a chain with an infinite state space may have none at all. For a finite, irreducible Markov chain the stationary distribution is unique, and if the chain is additionally aperiodic (i.e. ergodic), the chain converges to it from any initial distribution.
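The eigenvector computation can be sketched with NumPy. Since `np.linalg.eig` returns right eigenvectors, we apply it to \( P^\top \) to obtain left eigenvectors of \( P \) (the function name `steady_state` and the example matrix are illustrative):

```python
import numpy as np

def steady_state(P):
    """Left eigenvector of P for eigenvalue 1, normalised to sum to 1."""
    # Right eigenvectors of P.T are left eigenvectors of P.
    eigvals, eigvecs = np.linalg.eig(P.T)
    # Pick the eigenvalue closest to 1 (exactly 1 for a stochastic matrix).
    idx = np.argmin(np.abs(eigvals - 1.0))
    pi = np.real(eigvecs[:, idx])
    # Normalise so the entries sum to 1 (also fixes the sign).
    return pi / pi.sum()

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(steady_state(P))   # approximately [0.8333, 0.1667]
```

For larger or sparse chains, solving the linear system \( \pi (P - I) = 0 \) together with \( \sum_i \pi_i = 1 \) directly is often preferred over a full eigendecomposition.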

Please note that while we often talk about reaching the "steady state" in the long run, not all Markov chains will actually converge to a steady state from every initial distribution; this depends on properties of the chain such as ergodicity. For example, a periodic chain can oscillate forever even though a stationary distribution exists.
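For an ergodic chain, this long-run convergence can be observed directly by repeatedly multiplying an arbitrary initial distribution by \( P \) (a simple power-iteration sketch, reusing the hypothetical matrix from above):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

mu = np.array([0.0, 1.0])   # arbitrary initial distribution
for _ in range(50):
    mu = mu @ P             # distribution after one more step of the chain

print(mu)                    # approximately [0.8333, 0.1667], the steady state
```

The rate of convergence is governed by the second-largest eigenvalue modulus of \( P \) (here 0.4, so 50 steps is far more than enough).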