Theorem: for an infinite-state Markov Chain, if \vec{\pi}^{\text{limiting}} exists, then \vec{\pi}^{\text{stationary}} = \vec{\pi}^{\text{limiting}}, and no other stationary distribution exists.
Proof: \pi_j^{\text{limiting}} \geq \sum_{k = 0}^\infty \pi_k^{\text{limiting}} \mathbb{P}_{kj}
Proof: \pi_j^{\text{limiting}} \leq \sum_{k = 0}^\infty \pi_k^{\text{limiting}} \mathbb{P}_{kj} by contradiction.
To see why you cannot swap two infinite sums, consider the following matrix:
M = \begin{bmatrix}1 & 1 & 1 & \dots \\ -1 & -1 & -1 & \dots \\ 1 & 1 & 1 & \dots \\ -1 & -1 & -1 & \dots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}
Summing each column first, the entries cancel in pairs (row 2k with row 2k + 1), while summing each row first gives a divergent series:
\begin{align*} \sum_{j = 0}^\infty \sum_{i = 0}^\infty M_{ij} = \sum_{j = 0}^\infty 0 =& 0\\ \sum_{i = 0}^\infty \sum_{j = 0}^\infty M_{ij} = \sum_{i = 0}^\infty (-1)^i \cdot \infty =& \text{Undefined}\\ \end{align*}
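A finite truncation of M makes the asymmetry concrete (a minimal sketch; the truncation sizes are only illustrative): every column sum cancels to 0, while each individual row sum grows without bound as the truncation grows.

```python
# Truncated version of M: row i is all +1 (i even) or all -1 (i odd).
def truncated_M(n):
    """Return the top-left n x n block of M as a list of rows."""
    return [[1 if i % 2 == 0 else -1 for _ in range(n)] for i in range(n)]

for n in (10, 100, 1000):  # even truncation sizes, so rows pair up
    M = truncated_M(n)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    row_sums = [sum(M[i]) for i in range(n)]
    # Summing down a column, +1 and -1 cancel in pairs -> every column sum is 0.
    assert all(c == 0 for c in col_sums)
    # Summing along a row, the terms never cancel -> |row sum| = n, diverging.
    assert all(abs(r) == n for r in row_sums)
```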
Proof: (\forall \vec{\pi}^{\text{stationary}})(\vec{\pi}^{\text{stationary}} = \vec{\pi}^{\text{limiting}})
Consider the following Markov Chain resembling a server's packet queue, where each state number represents the number of packets in the queue waiting for the server to process.
The stationary equations are complicated to solve directly, but observing that the chain is time-reversible, we can write \pi_i r = \pi_{i + 1} s, which gives the guess \pi_i = \left(\frac{r}{s}\right)^i \pi_0.
We check our guess as follows:
\begin{align*} \pi_i =& \pi_{i - 1}r + \pi_i (1 - r - s) + \pi_{i + 1}s \tag{stationary equation}\\ \left(\frac{r}{s}\right)^i \pi_0 =& \left(\frac{r}{s}\right)^{i - 1}\pi_0 r + \left(\frac{r}{s}\right)^{i}\pi_0 (1 - r - s) + \left(\frac{r}{s}\right)^{i + 1}\pi_0 s \tag{plug in for $\pi_i, \pi_{i - 1}, \pi_{i + 1}$}\\ \end{align*}
To calculate \pi_0, we use \sum_{i = 0}^\infty \pi_i = 1 (the geometric series gives \pi_0 = 1 - \frac{r}{s}, provided r < s); once we have \pi_0, we know \pi_i for all i.
We can also calculate the expected number of packets in the system; this is useful for figuring out whether the chain is transient and whether a stationary distribution exists. This value will not make sense if the chain is transient or null recurrent.
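As a sanity check (a minimal sketch with illustrative values r = 0.3, s = 0.5), we can verify numerically that \pi_i = (1 - \frac{r}{s})(\frac{r}{s})^i satisfies the stationary equation, sums to one, and yields the geometric-series value \frac{r/s}{1 - r/s} for the expected number of packets:

```python
r, s = 0.3, 0.5           # illustrative arrival / service probabilities, r < s
rho = r / s
N = 200                   # truncation; the neglected tail is geometrically small

pi = [(1 - rho) * rho ** i for i in range(N)]

# Stationary equation: pi_i = pi_{i-1} r + pi_i (1 - r - s) + pi_{i+1} s
for i in range(1, N - 1):
    lhs = pi[i]
    rhs = pi[i - 1] * r + pi[i] * (1 - r - s) + pi[i + 1] * s
    assert abs(lhs - rhs) < 1e-12

assert abs(sum(pi) - 1) < 1e-9                          # distribution sums to 1
expected_packets = sum(i * p for i, p in enumerate(pi))
assert abs(expected_packets - rho / (1 - rho)) < 1e-9   # E[N] = rho / (1 - rho)
```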
For more complicated chains that are not time-reversible, the stationary equations are hard to solve with infinitely many states; solving them might involve using the z-transform.
f_j: Let f_j denote Pr\{\text{a stochastic process from } j \text{ will return to } j\} = Pr\{\text{return to state } j \mid \text{current state is } j\}.
Recurrent: State j is recurrent if f_j = 1. A Markov Chain is recurrent if all states are recurrent.
Given f_j = 1, the expected number of visits is \infty, since the number of visits is \text{Geometric}(1 - f_j) = \text{Geometric}(0) with mean \frac{1}{1 - f_j} = \frac{1}{1 - 1} = \infty.
With probability 1, the number of visits to a recurrent state is infinite: we always come back, so as the run length approaches infinity, the number of returns approaches infinity.
Transient: State j is transient if f_j < 1. A Markov Chain is transient if all states are transient.
Given f_j < 1, the expected number of visits is finite, since the number of visits is \text{Geometric}(1 - f_j) with mean \frac{1}{1 - f_j} < \infty where 0 \leq f_j < 1.
With probability 1, the number of visits to a transient state is finite: as the run length approaches infinity, the event "never return again," which has probability 1 - f_j > 0, will eventually happen after finitely many returns.
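A tiny simulation illustrates the \text{Geometric}(1 - f_j) visit count (a minimal sketch with an illustrative chain: from state j we return with probability f = 0.4 and otherwise leave forever):

```python
import random

random.seed(0)
f = 0.4                      # illustrative return probability f_j < 1
trials = 50_000

def visits():
    """Count visits to j (including the initial one) before leaving forever."""
    count = 1
    while random.random() < f:   # each return happens independently w.p. f
        count += 1
    return count

mean = sum(visits() for _ in range(trials)) / trials
# Number of visits ~ Geometric(1 - f), so the mean should be 1/(1 - f).
assert abs(mean - 1 / (1 - f)) < 0.05
```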
Formally, we can calculate the expected time to travel from state a \to b as:
Theorem: the expected number of visits E[N] to any specific state satisfies E[N] = \infty \iff that state is recurrent, and E[N] < \infty \iff that state is transient.
Recurrence Class Property: If state i is recurrent and communicates with j, then j is recurrent.
If state i is transient and communicates with j, then j is transient.
An irreducible chain is either entirely transient or entirely recurrent.
More Class Properties: the class properties also hold for positive recurrent and null recurrent.
Proof: Assume we have a path j \to i in m steps ((\mathbb{P}^m)_{ji} > 0), a path i \to j in n steps ((\mathbb{P}^n)_{ij} > 0), and state i being recurrent (\sum_{t = 0}^\infty (\mathbb{P}^t)_{ii} = \infty); we show state j is recurrent.
The expected number of visits back to j starting from j in any number of steps (\sum_{t = 0}^\infty (\mathbb{P}^t)_{jj}) is at least the expected number of visits along the specific subset of paths j \to i \to i \to j, which is infinite: \sum_{t = 0}^\infty (\mathbb{P}^{m + t + n})_{jj} \geq (\mathbb{P}^m)_{ji} \left(\sum_{t = 0}^\infty (\mathbb{P}^t)_{ii}\right) (\mathbb{P}^n)_{ij} = \infty.
Theorem: for a transient Markov Chain, the limiting distribution does not exist.
Proof:
Theorem: for a Markov Chain where the limiting probabilities are all zero, no stationary distribution exists and no limiting distribution exists.
Gambler's Walk: a chain with state space \mathbb{Z}. Given current state s, with probability p the next state is s + 1, and with probability 1 - p the next state is s - 1.
Theorem: p = \frac{1}{2} \iff \text{Gambler's Walk is recurrent}
Proof: We calculate the expected number of visits V back to state 0 starting from state 0. A return is only possible after an even number of steps, and (\mathbb{P}^{2n})_{00} = {2n \choose n} p^n (1 - p)^n, so V = \sum_{n = 1}^\infty {2n \choose n} \left(p(1 - p)\right)^n.
We can now upper bound and lower bound V for different choices of p:
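We can examine V = \sum_{n = 1}^\infty {2n \choose n} (p(1 - p))^n numerically (a minimal sketch; in fact \sum_n {2n \choose n} x^n = \frac{1}{\sqrt{1 - 4x}} - 1 and 1 - 4p(1 - p) = (2p - 1)^2, so for p \neq \frac{1}{2} the series converges to \frac{1}{|2p - 1|} - 1):

```python
def partial_V(p, terms):
    """Partial sum of V = sum_{n>=1} C(2n, n) * (p*(1-p))^n, built iteratively."""
    x = p * (1 - p)
    total, term = 0.0, 1.0   # term holds C(2n, n) * x^n, starting at n = 0
    for n in range(1, terms + 1):
        # ratio C(2n,n) / C(2n-2,n-1) = 2n(2n-1) / n^2
        term *= (2 * n) * (2 * n - 1) / (n * n) * x
        total += term
    return total

# Biased walk (p = 0.8): V converges -> finite expected returns (transient).
assert abs(partial_V(0.8, 500) - (1 / abs(2 * 0.8 - 1) - 1)) < 1e-9
# Unbiased walk (p = 0.5): partial sums grow without bound (recurrent).
assert partial_V(0.5, 100) < partial_V(0.5, 10_000)
assert partial_V(0.5, 10_000) > 50
```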
Proof of Misha Lavrov's Lemma: consider
\sum_{k = 0}^{2n} {2n \choose k} = (1 + 1)^{2n} = 2^{2n} = 4^n. We get {2n \choose n} < 4^n because {2n \choose n} is one of the terms in \sum_{k = 0}^{2n} {2n \choose k}.
We get \frac{4^n}{2n + 1} < {2n \choose n} because \frac{4^n}{2n + 1} is the average of the 2n + 1 terms {2n \choose 0}, {2n \choose 1}, \dots, {2n \choose 2n}, and {2n \choose n} is the maximum of these terms.
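The two bounds are easy to spot-check with exact integer arithmetic (a minimal sketch using Python's math.comb):

```python
from math import comb

# Misha Lavrov's Lemma: 4^n / (2n + 1) < C(2n, n) < 4^n for n >= 1.
for n in range(1, 200):
    central = comb(2 * n, n)
    assert central < 4 ** n                  # one term of a sum equal to 4^n
    assert 4 ** n < central * (2 * n + 1)    # average < maximum term
```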
If we modify Gambler's Walk by removing the negative states, then the new Markov Chain is still recurrent because the probability of coming back to a state is at least as high for every state.
Ergodic Theorem of Markov Chains (Limiting Probability): given a recurrent (either positive or null), aperiodic, irreducible DTMC, the limiting probabilities exist: (\forall j)(\pi_j = \lim_{n \to \infty} (\mathbb{P}^n)_{ij} = \frac{1}{m_{jj}}). (The proof is 10 pages long.)
Note that it is possible that m_{jj} = \infty. In this case, the limiting probabilities exist and are all zero. (For any aperiodic, irreducible, recurrent chain, the limiting probabilities always exist.) But the limiting distribution might not exist, because the sum of countably many zeros does not add up to one.
Ergodic Definition (Limiting Distribution): an ergodic DTMC is aperiodic, irreducible, and positive recurrent. For an ergodic DTMC, the limiting distribution exists.
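For intuition, a finite ergodic chain shows the theorem in action (a minimal sketch with an arbitrary 3-state transition matrix chosen for illustration): every row of \mathbb{P}^n converges to the same vector \vec{\pi}, and that vector is stationary.

```python
# An arbitrary aperiodic, irreducible (hence ergodic) 3-state chain.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Pn = P
for _ in range(100):         # compute P^101 by repeated multiplication
    Pn = mat_mul(Pn, P)

pi = Pn[0]
# Every row of P^n converges to the same limiting vector...
for row in Pn:
    assert all(abs(row[j] - pi[j]) < 1e-12 for j in range(3))
# ...and that vector is stationary: pi P = pi, with entries summing to 1.
piP = [sum(pi[k] * P[k][j] for k in range(3)) for j in range(3)]
assert all(abs(piP[j] - pi[j]) < 1e-12 for j in range(3))
assert abs(sum(pi) - 1) < 1e-12
```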
The limiting distribution represents the distribution we end up in; but if we are in a connected chain, we have to go somewhere, right? So the limiting distribution can never be zero. This argument is false because: with infinitely many states, the probability mass can spread out over more and more states, so every individual limiting probability can be zero even though we are always in some state.
Theorem: for an ergodic DTMC, the limiting distribution exists (w.p. 1): the limiting probabilities exist, are positive, and sum to one.
We can show m_{jj} = \infty for a null-recurrent chain by doing something like the following.
Positive Recurrent: f_j = 1 \land E[T_{jj}] = m_{jj} < \infty
Null Recurrent: f_j = 1 \land E[T_{jj}] = m_{jj} = \infty
Null recurrence is not intuitive because the expected time between visits to a state is infinite, yet the chain still visits the state infinitely often.
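The slow returns of a null-recurrent chain can be seen in simulation (a minimal sketch using the symmetric Gambler's Walk, which is null recurrent): the empirical mean return time to 0 keeps growing as we raise the cap on how long we are willing to wait, instead of settling near a finite value.

```python
import random

random.seed(1)

def capped_return_time(cap):
    """Steps for a symmetric +/-1 walk to return to 0, truncated at `cap` steps."""
    pos, steps = 0, 0
    while steps < cap:
        pos += 1 if random.random() < 0.5 else -1
        steps += 1
        if pos == 0:
            break
    return steps

def mean_return(cap, trials=2000):
    return sum(capped_return_time(cap) for _ in range(trials)) / trials

# If E[T_00] were finite, raising the cap would barely change the mean.
# Here the truncated mean grows roughly like sqrt(cap), hinting E[T_00] = inf.
assert mean_return(10_000) > 2 * mean_return(100)
```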
There are 3 reasons why the limiting probability might not exist:
| Finite | | | | | | | | |
|---|---|---|---|---|---|---|---|---|
| Periodicity | Aperiodic | Aperiodic | Aperiodic | Aperiodic | Periodic | Periodic | Periodic | Periodic |
| Reducibility | Reducible | Reducible | Reducible | Irreducible | Reducible | Reducible | Reducible | Irreducible |
| Connectivity | Not Connected | Connected | Connected | Connected | Not Connected | Connected | Connected | Connected |
| Sinkivity | / | Multiple Sink | One Sink | No Sink | / | Multiple Sink | One Sink | No Sink |
| Recurrency | / | Sink Recurrent | Sink Recurrent | Positive Recurrent | / | Sink Recurrent | Sink Recurrent | Positive Recurrent |
| Ergodic | / | No | No | Yes | / | No | No | No |
| Limiting Distribution | / | Not Exists | Same Limit | Same Limit | / | Not Exists | Not Exists | Not Exists |
| Stationary Distribution | / | Multiple | Same Limit | Same Limit | / | Multiple | One | One |
| π_j | / | > 0 | > 0 | > 0 | / | > 0 | > 0 | > 0 |
| p_j | / | / | w.p. 1 Limit | w.p. 1 Limit | / | / | w.p. 1 Station | w.p. 1 Station |
| m_{jj} | / | / | / | < ∞ | / | / | / | < ∞ |
| 1/m_{jj} | / | / | / | Same Limit | / | / | / | Same Station |
| f_j | / | / | / | 1 | / | / | / | 1 |
| E[visit] | / | / | / | ∞ | / | / | / | ∞ |
| Infinite | | | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Periodicity | Aperiodic | Aperiodic | Aperiodic | Aperiodic | Aperiodic | Aperiodic | Periodic | Periodic | Periodic | Periodic | Periodic | Periodic |
| Reducibility | Reducible | Reducible | Reducible | Irreducible | Irreducible | Irreducible | Reducible | Reducible | Reducible | Irreducible | Irreducible | Irreducible |
| Connectivity | Not Connected | Connected | Connected | Connected | Connected | Connected | Not Connected | Connected | Connected | Connected | Connected | Connected |
| Sinkivity | / | Multiple Sink | One Sink | No Sink | No Sink | No Sink | / | Multiple Sink | One Sink | No Sink | No Sink | No Sink |
| Recurrency | / | / | / | Positive Recurrent | Null Recurrent | Transient | / | / | / | Positive Recurrent | Null Recurrent | Transient |
| Ergodic | / | / | / | Yes | No | No | / | / | / | No | No | No |
| Limiting Distribution | / | / | / | Same Limit | Not Exists | Not Exists | / | / | / | Not Exists | Not Exists | Not Exists |
| Stationary Distribution | / | / | / | Same Limit | Not Exists | Not Exists | / | / | / | One | Not Exists | Not Exists |
| π_j | / | / | / | > 0 | 0 | 0 | / | / | / | > 0 | 0 | 0 |
| p_j | / | / | / | w.p. 1 Limit | 0 | 0 | / | / | / | w.p. 1 Station | 0 | 0 |
| m_{jj} | / | / | / | < ∞ | ∞ | ∞ | / | / | / | < ∞ | ∞ | ∞ |
| 1/m_{jj} | / | / | / | Same Limit | 0 | 0 | / | / | / | Same Station | 0 | 0 |
| f_j | / | / | / | 1 | 1 | < 1 | / | / | / | 1 | 1 | < 1 |
| E[visit] | / | / | / | ∞ | ∞ | < ∞ | / | / | / | ∞ | ∞ | < ∞ |
// WARNING: m_{10} = m_{21} = \dots (this is useful for solving m_{00} by conditioning on the last step). Conditioning requires every such expected time to be finite, which requires the Markov chain to be positive recurrent (since \infty can also be a solution to the equations).
// WARNING: on solving by arguing that one cannot cross the same transition a second time in one direction without first crossing back: this gives a stationary distribution from the time-reversibility equations, but you still need to argue that the stationary distribution is unique by stating that the chain is aperiodic and irreducible.
Theorem: If you have a stationary distribution, and the chain is aperiodic (limit exists) and irreducible (stationary distribution unique), then the stationary distribution is the limiting distribution.
// QUESTION: does irreducibility itself guarantee that the stationary distribution is unique?
Theorem: if a (non-unique) stationary distribution exists in an infinite DTMC, the chain must be recurrent. // QUESTION: is this true?
Theorem: If there is a unique stationary distribution, it is also the long-run fraction of time spent in each state. (unproved)
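The long-run-fraction claim is easy to test empirically (a minimal sketch with an illustrative 2-state ergodic chain, for which solving \vec{\pi}\mathbb{P} = \vec{\pi} by hand gives \vec{\pi} = (\frac{2}{3}, \frac{1}{3})):

```python
import random

random.seed(2)
# A 2-state ergodic chain; pi_0 * 0.1 = pi_1 * 0.2 gives pi = (2/3, 1/3).
P = [[0.9, 0.1],
     [0.2, 0.8]]

steps = 200_000
state, time_in_0 = 0, 0
for _ in range(steps):
    time_in_0 += (state == 0)
    state = 0 if random.random() < P[state][0] else 1

# Long-run fraction of time in state 0 should match pi_0 = 2/3.
assert abs(time_in_0 / steps - 2 / 3) < 0.02
```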