Lecture 019

Discrete-Time Infinite State

\begin{align*} \mathbb{P}&: \infty \times \infty \tag{transition matrix}\\ \vec{\pi} &= (\pi_0, \pi_1, ...), \quad \sum_{j = 0}^\infty \pi_j = 1 \tag{limiting or stationary distribution}\\ \pi_j^{\text{limiting}} &= \lim_{n \to \infty}(\mathbb{P}^n)_{ij} \tag{limiting distribution}\\ \pi_j^{\text{stationary}} &= \sum_{k = 0}^\infty \pi_k \cdot \mathbb{P}_{kj} \tag{stationary distribution}\\ \end{align*}

Limiting Distribution is Stationary

Theorem: for an infinite-state Markov Chain, if $\vec{\pi}^{\text{limiting}}$ exists, then $\vec{\pi}^{\text{stationary}} = \vec{\pi}^{\text{limiting}}$, and no other stationary distribution exists.

Proof: $\pi_j^{\text{limiting}} \geq \sum_{k = 0}^\infty \pi_k^{\text{limiting}} \mathbb{P}_{kj}$

\begin{align*} \pi_j^{\text{limiting}} =& \lim_{n \to \infty} (\mathbb{P}^{n+1})_{ij}\\ =& \lim_{n \to \infty} \sum_{k = 0}^\infty \left((\mathbb{P}^n)_{ik} \mathbb{P}_{kj}\right)\\ \geq& \lim_{n \to \infty} \sum_{k = 0}^M \left((\mathbb{P}^n)_{ik} \mathbb{P}_{kj}\right) \tag{$\forall M \in \mathbb{N}$}\\ =& \sum_{k = 0}^M \lim_{n \to \infty}(\mathbb{P}^n)_{ik}\mathbb{P}_{kj} \tag{swap limit and finite sum, $\forall M \in \mathbb{N}$}\\ =& \sum_{k = 0}^M \pi_k^{\text{limiting}} \mathbb{P}_{kj} \tag{$\forall M \in \mathbb{N}$}\\ \pi_j^{\text{limiting}} \geq& \sum_{k = 0}^\infty \pi_k^{\text{limiting}} \mathbb{P}_{kj} \tag{the bound holds for every $M$, so take $M \to \infty$}\\ \end{align*}

Proof: $\pi_j^{\text{limiting}} \leq \sum_{k = 0}^\infty \pi_k^{\text{limiting}} \mathbb{P}_{kj}$ by contradiction.

\begin{align*} 1 = \sum_{j = 0}^\infty \pi_j^{\text{limiting}} >& \sum_{j = 0}^\infty \left( \sum_{k = 0}^\infty \pi_k^{\text{limiting}} \mathbb{P}_{kj}\right) \tag{AFSOC $\exists l: \pi_l^{\text{limiting}} > \sum_{k = 0}^\infty \pi_k^{\text{limiting}} \mathbb{P}_{kl}$}\\ =& \sum_{k = 0}^\infty \left( \sum_{j = 0}^\infty \pi_k^{\text{limiting}} \mathbb{P}_{kj}\right) \tag{can swap any finite sums; infinite sums require all summands $\geq 0$}\\ =& \sum_{k = 0}^\infty \left( \pi_k^{\text{limiting}} \sum_{j = 0}^\infty \mathbb{P}_{kj}\right)\\ =& \sum_{k = 0}^\infty \left( \pi_k^{\text{limiting}} \cdot 1\right) \tag{rows of $\mathbb{P}$ sum to $1$}\\ =& 1 \tag{$\Rightarrow\Leftarrow$}\\ \end{align*}

To see why you cannot always swap the order of infinite sums, consider a matrix of the following form:

M = \begin{bmatrix}1 & 1 & 1 & ... \\-1 & -1 & -1 & ... \\1 & 1 & 1 & ... \\-1 & -1 & -1 & ... \\ ... & ... & ... & ...\end{bmatrix}

For this matrix, the two orders of summation behave differently:

\begin{align*} \sum_{j = 0}^\infty \sum_{i = 0}^\infty M_{ij} =& \sum_{j = 0}^\infty 0 = 0 \tag{pairing each $+1$ row with the $-1$ row below it, each column "sums" to $0$}\\ \sum_{i = 0}^\infty \sum_{j = 0}^\infty M_{ij} =& \infty - \infty + \infty - \cdots = \text{Undefined} \tag{each row sums to $+\infty$ or $-\infty$}\\ \end{align*}
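A quick numerical sketch of a truncated version of $M$ (the truncation size $n$ is an arbitrary choice) shows why the two orders behave differently: each row sums to $\pm n$, which blows up as $n$ grows, while partial sums down a column only oscillate and never settle:

```python
# Truncate M to n x n: row i is all +1 for even i, all -1 for odd i.
n = 100
M = [[1 if i % 2 == 0 else -1 for _ in range(n)] for i in range(n)]

# Row-first: each row sum is +n or -n, so the outer sum is "inf - inf".
row_sums = [sum(row) for row in M]
print(row_sums[:4])  # [100, -100, 100, -100]

# Column-first: partial sums down a column oscillate 1, 0, 1, 0, ...
# so the inner sum does not converge either.
col0_partials = [sum(M[i][0] for i in range(k + 1)) for k in range(6)]
print(col0_partials)  # [1, 0, 1, 0, 1, 0]
```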

Proof: $(\forall \vec{\pi}^{\text{stationary}})(\vec{\pi}^{\text{stationary}} = \vec{\pi}^{\text{limiting}})$

\begin{alignat*}{3} \pi_j^{\text{stationary}} =& \sum_{i = 0}^\infty \pi_i^{\text{stationary}}(\mathbb{P}^n)_{ij} &&= \sum_{i = 0}^M \pi_i^{\text{stationary}}(\mathbb{P}^n)_{ij} + \sum_{i = M + 1}^\infty \pi_i^{\text{stationary}}(\mathbb{P}^n)_{ij} \tag{$\forall M \in \mathbb{N}$}\\ \sum_{i = 0}^M \pi_i^{\text{stationary}}(\mathbb{P}^n)_{ij} \leq& \pi_j^{\text{stationary}} &&\leq \sum_{i = 0}^M \pi_i^\text{stationary}(\mathbb{P}^n)_{ij} + \sum_{i = M + 1}^\infty \pi_i^{\text{stationary}}\\ \lim_{n \to \infty}\sum_{i = 0}^M \pi_i^{\text{stationary}}(\mathbb{P}^n)_{ij} \leq& \lim_{n \to \infty} \pi_j^{\text{stationary}} &&\leq \lim_{n \to \infty} \sum_{i = 0}^M \pi_i^\text{stationary}(\mathbb{P}^n)_{ij} + \lim_{n \to \infty} \sum_{i = M + 1}^\infty \pi_i^{\text{stationary}}\\ \sum_{i = 0}^M \pi_i^{\text{stationary}}\pi_j^{\text{limiting}} \leq& \pi_j^{\text{stationary}} &&\leq \sum_{i = 0}^M \pi_i^{\text{stationary}}\pi_j^{\text{limiting}} + \sum_{i = M + 1}^\infty \pi_i^{\text{stationary}}\\ \pi_j^{\text{limiting}} \sum_{i = 0}^M \pi_i^{\text{stationary}} \leq& \pi_j^{\text{stationary}} &&\leq \pi_j^{\text{limiting}} \sum_{i = 0}^M \pi_i^{\text{stationary}} + \sum_{i = M + 1}^\infty \pi_i^{\text{stationary}}\\ \lim_{M \to \infty} \pi_j^{\text{limiting}} \sum_{i = 0}^M \pi_i^{\text{stationary}} \leq& \lim_{M \to \infty} \pi_j^{\text{stationary}} &&\leq \lim_{M \to \infty} \pi_j^{\text{limiting}} \sum_{i = 0}^M \pi_i^{\text{stationary}} + \lim_{M \to \infty} \sum_{i = M + 1}^\infty \pi_i^{\text{stationary}}\\ \pi_j^\text{limiting} \leq& \pi_j^\text{stationary} &&\leq \pi_j^\text{limiting}\\ \pi_j^\text{limiting} =& \pi_j^\text{stationary}\\ \end{alignat*}

Solving Stationary Distribution using Time-Reversibility Equations for Queueing

Consider the following Markov Chain that resembles a server's packet queue, where each state represents the number of packets in the queue waiting for the server to process.

\mathbb{P} = \begin{bmatrix} 1 - r & r & 0 & 0 & ...\\ s & 1 - r - s & r & 0 & ...\\ 0 & s & 1 - r - s & r & ...\\ 0 & 0 & s & 1 - r - s & ...\\ ... & ... & ... & ... & ...\\ \end{bmatrix}

The stationary equations are complicated to solve directly, but observe that the chain is time-reversible, so we can write:

\begin{cases} \pi_0 r = \pi_1 s\\ \pi_1 r = \pi_2 s\\ ...\\ \pi_0 + \pi_1 + \pi_2 + ... = 1\\ \end{cases} \implies \begin{cases} \pi_1 = \left(\frac{r}{s}\right) \pi_0\\ \pi_2 = \left(\frac{r}{s}\right) \pi_1 = \left(\frac{r}{s}\right)^2 \pi_0\\ \pi_3 = \left(\frac{r}{s}\right) \pi_2 = \left(\frac{r}{s}\right)^3 \pi_0\\ \end{cases} \implies \pi_i = \left(\frac{r}{s}\right)^i \pi_0 \tag{by guessing a solution}

We check our guess as follows:

\begin{align*} \pi_i =& \pi_{i - 1}r + \pi_i (1 - r - s) + \pi_{i + 1}s \tag{stationary equation}\\ \left(\frac{r}{s}\right)^i \pi_0 =& \left(\frac{r}{s}\right)^{i - 1}\pi_0 r + \left(\frac{r}{s}\right)^{i}\pi_0 (1 - r - s) + \left(\frac{r}{s}\right)^{i + 1}\pi_0 s \tag{plug in for $\pi_i, \pi_{i - 1}, \pi_{i + 1}$}\\ \frac{r}{s} =& r + \frac{r}{s} - \frac{r^2}{s} - r + \frac{r^2}{s} \tag{divide both sides by $\left(\frac{r}{s}\right)^{i - 1}\pi_0$ and expand}\\ \frac{r}{s} =& \frac{r}{s} \tag{$\checkmark$}\\ \end{align*}
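The cancellation can also be checked numerically. A minimal sketch with made-up rates $r = 0.2$, $s = 0.5$ (any rates with $r + s \leq 1$ work), plugging the unnormalized guess $\pi_i = (r/s)^i$ into the stationary equation:

```python
# Verify pi_i = pi_{i-1} r + pi_i (1 - r - s) + pi_{i+1} s
# for the guessed form pi_i = (r/s)^i (normalization cancels on both sides).
r, s = 0.2, 0.5  # assumed example rates with r < s

def pi(i: int) -> float:
    return (r / s) ** i

for i in range(1, 10):
    lhs = pi(i)
    rhs = pi(i - 1) * r + pi(i) * (1 - r - s) + pi(i + 1) * s
    assert abs(lhs - rhs) < 1e-12, (i, lhs, rhs)
print("stationary equation holds for i = 1..9")
```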

To calculate $\pi_0$, we use $\sum_{i = 0}^\infty \pi_i = 1$; once we have $\pi_0$, we know $\pi_i$ for all $i$.

\begin{align*} \pi_0 \sum_{i = 0}^\infty \left(\frac{r}{s}\right)^i =& 1\\ \pi_0 \left(\frac{1}{1 - \frac{r}{s}}\right) =& 1 \tag{by Geometric series, assuming $r < s$}\\ \pi_0 =& 1 - \frac{r}{s}\\ \pi_i =& \left(\frac{r}{s}\right)^i\left(1 - \frac{r}{s}\right)\\ \end{align*}
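As a cross-check, we can compare this closed form against brute-force powers of a truncated transition matrix (the rates $r = 0.2$, $s = 0.5$, the truncation size $N$, and the power are all arbitrary choices for illustration):

```python
import numpy as np

r, s = 0.2, 0.5  # assumed example rates with r < s
N = 200          # truncate the infinite chain to N states

P = np.zeros((N, N))
P[0, 0], P[0, 1] = 1 - r, r
for i in range(1, N - 1):
    P[i, i - 1], P[i, i], P[i, i + 1] = s, 1 - r - s, r
P[N - 1, N - 2], P[N - 1, N - 1] = s, 1 - s  # reflect at the truncation edge

# A row of P^n approximates the limiting distribution for large n.
pi = np.linalg.matrix_power(P, 5000)[0]
formula = [(r / s) ** i * (1 - r / s) for i in range(5)]
print(np.round(pi[:5], 4))
print(np.round(formula, 4))
```

Both printouts should agree, since the truncated birth-death chain has essentially the same stationary distribution when the tail mass $(r/s)^N$ is negligible.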

We can also calculate the expected number of packets in the system; this is useful for figuring out whether the chain is transient and whether a stationary distribution exists.

\begin{align*} E[\text{number of packets}] =& \sum_{i = 0}^\infty i \pi_i\\ =& \sum_{i = 1}^\infty \left(i \left(\frac{r}{s}\right)^i\left(1 - \frac{r}{s}\right)\right)\\ =& \left(1 - \frac{r}{s}\right) \cdot \frac{r}{s} \sum_{i = 1}^\infty \left(i\left(\frac{r}{s}\right)^{i - 1}\right)\\ =& \left(1 - \frac{r}{s}\right) \cdot \frac{r}{s} \left(1 + 2\left(\frac{r}{s}\right) + 3\left(\frac{r}{s}\right)^2 + ...\right)\\ =& \left(1 - \frac{r}{s}\right)\cdot \frac{r}{s} \cdot \frac{1}{\left(1 - \frac{r}{s}\right)^2}\\ =& \frac{\frac{r}{s}}{1 - \frac{r}{s}}\\ =& \frac{r}{s-r}\\ \end{align*}
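A quick numeric sanity check of the closed form, again with the made-up rates $r = 0.2$, $s = 0.5$ (so $r/(s-r) = 0.2/0.3 \approx 0.667$):

```python
r, s = 0.2, 0.5  # assumed example rates with r < s
x = r / s

# Truncate the series sum_i i * x^i * (1 - x); the tail is negligible.
series = sum(i * x**i * (1 - x) for i in range(10_000))
closed_form = r / (s - r)
print(series, closed_form)  # both ~0.6667
```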

This value does not make sense if the chain is transient or null recurrent (i.e., when $r \geq s$, where the geometric series diverges).

For more complicated chains that are not time-reversible, the stationary equations are hard to solve with infinitely many states; solving them may involve z-transforms.

Ergodicity

For a random walk that moves up with probability $p$ and down with probability $q = 1 - p$, let $T_{n+1, n}$ denote the time to go from state $n+1$ down to state $n$. Conditioning on the first step:

\begin{align*} E[T_{n+1, n}] =& 1 + 0 \cdot q + E[T_{n+2, n}] \cdot p\\ =& 1 + E[T_{n+2, n}] \cdot p\\ =& 1 + 2E[T_{n+1, n}] \cdot p \tag{$E[T_{n+2, n}] = E[T_{n+2, n+1}] + E[T_{n+1, n}] = 2E[T_{n+1, n}]$}\\ E[T_{n+1, n}] =& \frac{1}{1 - 2p} \tag{finite only when $p < \frac{1}{2}$}\\ \end{align*}
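A Monte Carlo sketch of this identity (the helper `time_down_one`, the seed, $p = 0.25$, and the trial count are all made up for illustration): with up-probability $p = 0.25$, the closed form gives $E[T_{n+1,n}] = 1/(1 - 0.5) = 2$.

```python
import random

random.seed(0)

def time_down_one(p: float) -> int:
    """Steps until a +1/-1 walk (up w.p. p) first reaches one below its start."""
    pos, steps = 0, 0
    while pos > -1:
        pos += 1 if random.random() < p else -1
        steps += 1
    return steps

p = 0.25  # assumed p < 1/2 so that the expectation is finite
trials = 200_000
avg = sum(time_down_one(p) for _ in range(trials)) / trials
print(avg, 1 / (1 - 2 * p))  # Monte Carlo estimate vs closed form 2.0
```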

Expected Time Visit State

$f_j$: Let $f_j$ denote $Pr\{\text{the process starting at state } j \text{ eventually returns to } j\} = Pr\{\text{return to state } j \mid \text{current state is } j\}$.

• Therefore, the total number of visits to state $j$ is $X \sim \text{Geometric}(1-f_j)$: each visit independently has probability $1 - f_j$ of being the last.

Recurrent: State $j$ is recurrent if $f_j = 1$. A Markov Chain is recurrent if all states are recurrent.

• Given $f_j = 1$, the expected number of visits is $\infty$: $\text{Geometric}(1 - f_j) = \text{Geometric}(0)$ has mean $\frac{1}{1 - 1} = \infty$.

• With probability $1$, the number of visits to a recurrent state is infinite: we always come back, so over an infinitely long run we return infinitely many times.

Transient: State $j$ is transient if $f_j < 1$. A Markov Chain is transient if all states are transient.

• Given $f_j < 1$, the expected number of visits is finite: $\text{Geometric}(1 - f_j)$ has mean $\frac{1}{1 - f_j} < \infty$.

• With probability $1$, the number of visits to a transient state is finite: the event of never returning has probability $1-f_j > 0$, so it eventually happens after finitely many tries.

Formally, we can calculate the expected number of visits to state $b$ starting from state $a$:

\begin{align*} E[X] =& E[X_1] + E[X_2] + ... \tag{where $X_n$ indicates whether we are at $b$ at step $n$, starting from $a$}\\ =& 1 \cdot (\mathbb{P}^1)_{ab} + 1 \cdot (\mathbb{P}^2)_{ab} + 1 \cdot (\mathbb{P}^3)_{ab} + ...\\ =& \sum_{n = 1}^\infty (\mathbb{P}^n)_{ab}\\ \sum_{n = 1}^\infty (\mathbb{P}^n)_{ab} =& \infty \tag{if $b$ is recurrent: we visit $b$ infinitely many times}\\ \sum_{n = 1}^\infty (\mathbb{P}^n)_{ab} <& \infty \tag{if $b$ is transient: we visit $b$ finitely many times} \end{align*}
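For intuition, here is the same sum computed on a small made-up finite chain with a transient state $0$ that leaks into an absorbing state $1$; the number of visits to $0$ is $\text{Geometric}(1/2)$, so the sum converges to $2$ (counting the visit at time $0$):

```python
import numpy as np

# Toy chain: from state 0, stay w.p. 1/2 or move to absorbing state 1.
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])

# Expected visits to 0 starting at 0, including time 0: sum_n (P^n)_{00}.
visits = sum(np.linalg.matrix_power(P, n)[0, 0] for n in range(200))
print(visits)  # geometric series 1 + 1/2 + 1/4 + ... = 2
```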

Theorem: The expected number of visits to a given state satisfies $E[N] = \infty$ $\iff$ the state is recurrent, and $E[N] < \infty$ $\iff$ the state is transient.

Recurrence Class Property

Recurrence Class Property: If state $i$ is recurrent and communicates with $j$, then $j$ is recurrent.

• If state $i$ is transient and communicates with $j$, then $j$ is transient.

• An irreducible chain is either entirely transient or entirely recurrent.

• More Class Properties: the class properties also hold for positive recurrent and null recurrent.

Proof: Assume we have a path $j \to i$ in $m$ steps ($(\mathbb{P}^m)_{ji} > 0$), a path $i \to j$ in $n$ steps ($(\mathbb{P}^n)_{ij} > 0$), and that state $i$ is recurrent ($\sum_{s = 0}^\infty (\mathbb{P}^s)_{ii} = \infty$); we show state $j$ is recurrent.

\begin{align*} \sum_{t = 0}^\infty (\mathbb{P}^t)_{jj} \geq& \sum_{s = 0}^\infty (\mathbb{P}^{m + n + s})_{jj} \tag{restrict to return times of length at least $m + n$}\\ \geq& \sum_{s = 0}^\infty (\mathbb{P}^m)_{ji}(\mathbb{P}^s)_{ii}(\mathbb{P}^n)_{ij} \tag{restrict to paths of the form $j \to i \to i \to j$}\\ =& (\mathbb{P}^m)_{ji} (\mathbb{P}^n)_{ij} \sum_{s = 0}^\infty(\mathbb{P}^s)_{ii} = \infty \tag{both prefactors are positive and the sum diverges}\\ \end{align*}

Intuitively: since state $i$ is recurrent, the expected number of visits back to $j$ starting from $j$ over all paths ($\sum_{t = 0}^\infty (\mathbb{P}^t)_{jj}$) is at least the expected number of visits over the specific subset of paths $j \to i \to i \to j$, which is infinite.

Limiting/Stationary Distribution Does Not Exist

Theorem: for a transient Markov Chain, the limiting distribution does not exist.

Proof:

1. The limiting probability is zero for every state $j$: $\lim_{n \to \infty} (\mathbb{P}^n)_{ij} = 0$, because each state is visited only finitely often, so the probability of being in state $j$ after $n \to \infty$ steps is zero.
2. The sum of the limiting probabilities is zero: $\sum_{j = 0}^\infty \pi_j = 0$, because summing countably many zeros (we have countably many states) is still zero; hence the limit is not a distribution and cannot satisfy the stationary equations.

Theorem: for a Markov Chain where the limiting probabilities are all zero, no stationary distribution exists and no limiting distribution exists.

Infinite Random Walk

Gambler's Walk: a chain with state space $\mathbb{Z}$. Given current state $s$, the next state is $s+1$ with probability $p$ and $s-1$ with probability $1-p$.

Theorem: $p = \frac{1}{2} \iff \text{Gambler's Walk is recurrent}$

Proof: We calculate the expected number of visits back to state $0$ from state $0$:

\begin{align*} V =& \sum_{n = 1}^\infty (\mathbb{P}^n)_{00}\\ =& \sum_{n = 1}^\infty (\mathbb{P}^{2n})_{00} \tag{odd-length walks cannot return to $0$}\\ =& \sum_{n = 1}^\infty {2n \choose n}p^n(1-p)^n \tag{in $2n$ steps, we must take exactly $n$ steps left and $n$ steps right to return}\\ \end{align*}

We can now upper bound and lower bound $V$ for different choices of $p$:

\begin{alignat*}{3} \frac{4^n}{2n+1} <& {2n \choose n} &&< 4^n \tag{by Misha Lavrov's Lemma}\\ \sum_{n = 1}^\infty \frac{4^n}{2n+1}p^n(1-p)^n <& \sum_{n = 1}^\infty {2n \choose n}p^n(1-p)^n &&< \sum_{n = 1}^\infty 4^np^n(1-p)^n\\ & V &&< \sum_{n = 1}^\infty (4p(1-p))^n\\ & V &&< \infty \tag{if $p \neq \frac{1}{2} \implies 4p(1-p) < 1$}\\ \sum_{n = 1}^\infty \frac{4^n}{2n + 1}\frac{1}{4^n} <& V && \tag{if $p = \frac{1}{2} = 1 - p$}\\ \sum_{n = 1}^\infty \frac{1}{2n+1} <& V && \tag{if $p = \frac{1}{2} = 1 - p$}\\ \infty =& V && \tag{the lower bound diverges when $p = \frac{1}{2}$}\\ \end{alignat*}
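We can watch both behaviors numerically (the values of $p$ and the truncation sizes are arbitrary; the terms are built iteratively to avoid overflowing ${2n \choose n}$):

```python
def partial_V(p: float, terms: int) -> float:
    """Partial sum of sum_n C(2n, n) p^n (1-p)^n, computed iteratively."""
    total, term = 0.0, 1.0  # term tracks C(2n, n) p^n (1-p)^n, starting at n = 0
    for n in range(1, terms + 1):
        term *= (2 * n) * (2 * n - 1) / (n * n) * p * (1 - p)
        total += term
    return total

# p = 0.4: 4p(1-p) = 0.96 < 1, so the series converges (to 1/sqrt(0.04) - 1 = 4).
print(partial_V(0.4, 1_000), partial_V(0.4, 2_000))  # nearly identical

# p = 0.5: partial sums grow like sqrt(N) and never converge.
print(partial_V(0.5, 1_000), partial_V(0.5, 4_000))  # still climbing
```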

Proof of Misha Lavrov's Lemma: consider

\sum_{k = 0}^{2n} {2n \choose k} = (1 + 1)^{2n} = 2^{2n} = 4^n

We get ${2n \choose n} < 4^n$ because ${2n \choose n}$ is one of the terms in $\sum_{k = 0}^{2n} {2n \choose k}$.

We get $\frac{4^n}{2n + 1} < {2n \choose n}$ because $\frac{4^n}{2n + 1}$ is the average of the $2n + 1$ terms ${2n \choose 0}, {2n \choose 1}, ..., {2n \choose 2n}$, and ${2n \choose n}$ is the maximum of those terms.
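The lemma is easy to check numerically for small $n$ (the range is an arbitrary choice):

```python
from math import comb

# Check 4^n / (2n + 1) < C(2n, n) < 4^n for a range of n.
for n in range(1, 60):
    assert 4**n / (2 * n + 1) < comb(2 * n, n) < 4**n
print("bounds hold for n = 1..59")
```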

If we modify the Gambler's Walk by removing the negative states (reflecting at $0$), the new Markov Chain is still recurrent, because the probability of returning to each state is at least as high as before.

Recurrence is Not Enough for Limiting Distribution

Ergodic Theorem of Markov Chains (Limiting Probability): given a recurrent (either positive or null), aperiodic, irreducible DTMC, the limiting probabilities exist: $(\forall j)(\pi_j = \lim_{n \to \infty} (\mathbb{P}^n)_{ij} = \frac{1}{m_{jj}})$. (the proof is 10 pages long)

Note that it is possible that $m_{jj} = \infty$. In this case, the limiting probabilities exist and are all zero. (For any aperiodic chain, the limiting probabilities always exist.) But the limiting distribution might not exist, because the sum of countably many zeros does not add up to one.

Ergodic Definition (Limiting Distribution): an ergodic DTMC is aperiodic, irreducible, and positive recurrent. For an ergodic DTMC, the limiting distribution exists.

One might argue: the limiting distribution represents where we end up in the long run, and in a connected chain we have to end up somewhere, so the limiting probabilities can never all be zero. This argument is false because:

\sum_{j = 0}^\infty \lim_{n \to \infty} (\mathbb{P}^n)_{ij} \neq \lim_{n \to \infty} \sum_{j = 0}^\infty (\mathbb{P}^n)_{ij}

Theorem: for an ergodic DTMC, the limiting distribution exists (w.p. $1$): the limiting probabilities exist, are positive, and sum to one:

\begin{align*} &\sum_{j = 0}^\infty \pi_j\\ =& \sum_{j = 0}^\infty \frac{1}{m_{jj}} \tag{by $m_{jj}$ is finite and $\pi_j$ exists}\\ =& \sum_{j = 0}^\infty p_j \tag{w.p. 1, by SLLN, since $m_{jj}$ finite}\\ =& 1 \tag{since sum of fraction of time in each state is $1$}\\ \end{align*}

Mean Time Between Visits

We can show $m_{jj} = \infty$ for a null-recurrent chain (here, the symmetric random walk on the nonnegative integers) by a calculation like the following:

\begin{align*} m_{10} =& 1 + \frac{1}{2} \cdot 0 + \frac{1}{2} m_{20}\\ =& 1 + \frac{1}{2}(m_{21} + m_{10})\\ =& 1 + \frac{1}{2} \cdot 2m_{10} \tag{$m_{21} = m_{10}$ by translation symmetry of the infinite chain}\\ =& 1 + m_{10}\\ m_{10} =& \infty\\ \end{align*}

Positive Recurrent: $f_j = 1 \land E[T_{jj}] = m_{jj} < \infty$

Null Recurrent: $f_j = 1 \land E[T_{jj}] = m_{jj} = \infty$

Null recurrence is not intuitive: the expected time between visits to a state is infinite, yet the chain still visits the state infinitely often.

Summary

There are 3 reasons why the limiting probabilities may fail to give a limiting distribution:

1. not a distribution: the limits exist but do not sum to one (only possible in infinite-state DTMCs)
2. limit depends on the period: the limit does not converge (periodic chains)
3. rows of the limiting matrix differ: the limit depends on the initial state (reducible chains)
Finite

| | Aperiodic | Aperiodic | Aperiodic | Aperiodic | Periodic | Periodic | Periodic | Periodic |
|---|---|---|---|---|---|---|---|---|
| Reducibility | Reducible | Reducible | Reducible | Irreducible | Reducible | Reducible | Reducible | Irreducible |
| Connectivity | Not Connected | Connected | Connected | Connected | Not Connected | Connected | Connected | Connected |
| Sinks | / | Multiple Sinks | One Sink | No Sink | / | Multiple Sinks | One Sink | No Sink |
| Recurrence | / | Sink Recurrent | Sink Recurrent | Positive Recurrent | / | Sink Recurrent | Sink Recurrent | Positive Recurrent |
| Ergodic | / | No | No | Yes | / | No | No | No |
| Limiting Distribution | / | Not Exists | Same Limit | Same Limit | / | Not Exists | Not Exists | Not Exists |
| Stationary Distribution | / | Multiple | Same Limit | Same Limit | / | Multiple | One | One |
| $\pi_j$ | / | $> 0$ | $> 0$ | $> 0$ | / | $> 0$ | $> 0$ | $> 0$ |
| $p_j$ | / | / | w.p. 1 Limit | w.p. 1 Limit | / | / | w.p. 1 Stationary | w.p. 1 Stationary |
| $m_{jj}$ | / | / | / | $< \infty$ | / | / | / | $< \infty$ |
| $1/m_{jj}$ | / | / | / | Same Limit | / | / | / | Same Stationary |
| $f_j$ | / | / | / | $1$ | / | / | / | $1$ |
| $E[\text{visits}]$ | / | / | / | $\infty$ | / | / | / | $\infty$ |

Infinite

| | Aperiodic | Aperiodic | Aperiodic | Aperiodic | Aperiodic | Aperiodic | Periodic | Periodic | Periodic | Periodic | Periodic | Periodic |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Reducibility | Reducible | Reducible | Reducible | Irreducible | Irreducible | Irreducible | Reducible | Reducible | Reducible | Irreducible | Irreducible | Irreducible |
| Connectivity | Not Connected | Connected | Connected | Connected | Connected | Connected | Not Connected | Connected | Connected | Connected | Connected | Connected |
| Sinks | / | Multiple Sinks | One Sink | No Sink | No Sink | No Sink | / | Multiple Sinks | One Sink | No Sink | No Sink | No Sink |
| Recurrence | / | / | / | Positive Recurrent | Null Recurrent | Transient | / | / | / | Positive Recurrent | Null Recurrent | Transient |
| Ergodic | / | / | / | Yes | No | No | / | / | / | No | No | No |
| Limiting Distribution | / | / | / | Same Limit | Not Exists | Not Exists | / | / | / | Not Exists | Not Exists | Not Exists |
| Stationary Distribution | / | / | / | Same Limit | Not Exists | Not Exists | / | / | / | One | Not Exists | Not Exists |
| $\pi_j$ | / | / | / | $> 0$ | $0$ | $0$ | / | / | / | $> 0$ | $0$ | $0$ |
| $p_j$ | / | / | / | w.p. 1 Limit | $0$ | $0$ | / | / | / | w.p. 1 Stationary | $0$ | $0$ |
| $m_{jj}$ | / | / | / | $< \infty$ | $\infty$ | $\infty$ | / | / | / | $< \infty$ | $\infty$ | $\infty$ |
| $1/m_{jj}$ | / | / | / | Same Limit | $0$ | $0$ | / | / | / | Same Stationary | $0$ | $0$ |
| $f_j$ | / | / | / | $1$ | $1$ | $< 1$ | / | / | / | $1$ | $1$ | $< 1$ |
| $E[\text{visits}]$ | / | / | / | $\infty$ | $\infty$ | $< \infty$ | / | / | / | $\infty$ | $\infty$ | $< \infty$ |

// WARNING: $m_{10} = m_{21} = ...$ (this is useful for solving $m_{00}$ by conditioning on the last step). Conditioning requires every $m_{ij}$ to be finite, which requires the Markov Chain to be positive recurrent (since $\infty$ can also be a solution of the equations).

// WARNING: the time-reversibility equations come from arguing that the chain cannot cross a transition in one direction more times than in the other without coming back. Solving them yields a stationary distribution, but you still need to argue that the stationary distribution is unique (and is the limiting distribution) by stating that the chain is aperiodic and irreducible.

Theorem: If you have a stationary distribution, and the chain is aperiodic (so the limit exists) and irreducible (so the stationary distribution is unique), then the stationary distribution is the limiting distribution.

// QUESTION: does irreducibility by itself guarantee that the stationary distribution is unique?

Theorem: if a (not necessarily unique) stationary distribution exists for an infinite DTMC, the chain must be recurrent. // QUESTION: is this true?

Theorem: If there is a unique stationary distribution, it is also the long-run fraction of time spent in each state. (unproved)
