Lecture 006

Z-Transforms

Reasons for studying z-transforms:

  1. computing higher moments
  2. solving recurrence relations, especially in Markov chains

Generating Functions: z-transforms are equivalent to probability generating functions.

Steps:

  1. build the onion (compute the z-transform)
  2. differentiate
  3. plug in z = 1

Definition

z-transform of a function: for a discrete function p(i), where i = 0, 1, 2, ..., the z-transform of p(i) is

G_p(z) = \sum_{i = 0}^\infty p(i)z^i

z-transform of a random variable: for a discrete random variable X with p.m.f. p, where we typically assume |z| \leq 1 and think of z as a constant,

\widehat{X}(z) = G_p(z) = E[z^X]

Convergence is guaranteed when X is non-negative and |z| \leq 1. Note that \widehat{X}(1) = 1 for all X.

Convergence proof: assuming -1 \leq z \leq 1, we show that -1 \leq \widehat{X}(z) \leq 1 for all such z.

\begin{align*} &-1 \leq z^i \leq 1\\ \implies& -Pr\{X = i\} \leq z^i Pr\{X = i\} \leq Pr\{X = i\}\\ \implies& -\sum_i Pr\{X = i\} \leq \sum_i z^i Pr\{X = i\} \leq \sum_i Pr\{X = i\}\\ \implies& -1 \leq \widehat{X}(z) \leq 1\\ \end{align*}
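
As a quick numerical sanity check (not part of the lecture), the sketch below evaluates E[z^X] at several z in [-1, 1]; the Binomial(5, 0.3) pmf is an arbitrary test distribution:

```python
# Sanity check: E[z^X] stays in [-1, 1] whenever |z| <= 1.
# Binomial(5, 0.3) is an arbitrary choice of test distribution.
from math import comb

n, p = 5, 0.3
pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]

for z in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    X_hat = sum(pmf[i] * z**i for i in range(n + 1))  # E[z^X]
    assert -1 <= X_hat <= 1
    print(f"z = {z:4.1f}   X_hat(z) = {X_hat:.4f}")
```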

Building the Onion

Bernoulli

For X \sim \text{Bernoulli}(p)

\widehat{X}(z) = z^0(1 - p) + z^1p = 1 - p + pz

Binomial

For X \sim \text{Binomial}(n, p)

\begin{align*} \widehat{X}(z) = E[z^X] =& \sum_{i = 0}^n z^i {n \choose i} p^i (1 - p)^{n - i}\\ =& \sum_{i = 0}^n {n \choose i} (pz)^i (1 - p)^{n - i}\\ =& (pz + (1 - p))^n \tag{binomial theorem}\\ \end{align*}

Or, since a Binomial(n, p) is the sum of n i.i.d. Bernoulli(p) random variables, the independent-sum property (see below) gives:

\widehat{X_{Binomial}}(z) = \left(\widehat{X_{Bernoulli}}(z)\right)^n = (1 - p + pz)^n
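
A small symbolic check of this closed form, using sympy with a concrete n = 4 (an arbitrary choice; the symbol names are mine):

```python
# Symbolic check (sympy): the direct sum over the Binomial pmf matches the
# closed form (1 - p + pz)^n; n = 4 is an arbitrary concrete choice.
import sympy as sp

z, p = sp.symbols('z p')
n = 4

direct = sum(sp.binomial(n, i) * (p*z)**i * (1 - p)**(n - i) for i in range(n + 1))
closed = (1 - p + p*z)**n  # n-th power of the Bernoulli onion

print(sp.simplify(direct - closed))  # -> 0
```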

z-transform of the sum of two independent Binomials: for W = X + Y where X \sim \text{Binomial}(n, p), Y \sim \text{Binomial}(m, p), and X \perp Y, we get \widehat{W}(z) = \widehat{X}(z) \cdot \widehat{Y}(z) = (1 - p + pz)^{n + m}, so W \sim \text{Binomial}(n + m, p).

Geometric

For X \sim \text{Geometric}(p)

\begin{align*} \widehat{X}(z) = E[z^X] =& \sum_{i = 1}^\infty z^i p(1 - p)^{i - 1}\\ =& zp\sum_{i = 1}^\infty (z(1 - p))^{i - 1}\\ =& zp\sum_{i = 0}^\infty (z(1 - p))^i\\ =& \frac{zp}{1 - z(1 - p)} \tag{assuming $\left|z(1 - p)\right| < 1$ for convergence}\\ \end{align*}
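
To sanity-check the closed form, one can expand it as a power series in z and compare coefficients against the Geometric pmf; a sympy sketch:

```python
# The power-series coefficients of zp / (1 - z(1 - p)) around z = 0
# should recover the Geometric pmf p(1 - p)^(i - 1).
import sympy as sp

z, p = sp.symbols('z p')
G = z*p / (1 - z*(1 - p))

poly = sp.series(G, z, 0, 6).removeO()
for i in range(1, 6):
    assert sp.simplify(poly.coeff(z, i) - p*(1 - p)**(i - 1)) == 0
print("coefficients 1..5 match the Geometric pmf")
```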

Poisson

For X \sim \text{Poisson}(\lambda)

\begin{align*} \widehat{X}(z) = E[z^X] =& \sum_{i = 0}^\infty z^i \frac{e^{-\lambda} \lambda^i}{i!}\\ =& e^{-\lambda}\sum_{i = 0}^\infty \frac{(\lambda z)^i}{i!}\\ =& e^{-\lambda}e^{\lambda z}\\ =& e^{-\lambda(1 - z)}\\ \end{align*}
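
A sympy sketch verifying this sum symbolically (the symbol names are mine):

```python
# Symbolic check (sympy): summing the Poisson pmf against z^i gives the onion.
import sympy as sp

z = sp.Symbol('z')
lam = sp.Symbol('lambda', positive=True)
i = sp.Symbol('i', integer=True, nonnegative=True)

G = sp.exp(-lam) * sp.summation((lam*z)**i / sp.factorial(i), (i, 0, sp.oo))
print(sp.simplify(G - sp.exp(-lam*(1 - z))))  # -> 0
```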

Onion Peeling

Now you have an onion that converges; you can hold it in your hand. Building an onion is easy, but what happens when you try to peel one? You cry.

Onion Peeling: for X a discrete, integer-valued, non-negative random variable with pmf Pr\{X = i\} for i = 0, 1, 2, ...:

\begin{align*} \widehat{X}(z)|_{z = 1} =& E[1^X] = 1\\ \widehat{X}'(z)|_{z = 1} =& E[X]\\ \widehat{X}''(z)|_{z = 1} =& E[X(X - 1)]\\ \widehat{X}'''(z)|_{z = 1} =& E[X(X - 1)(X - 2)]\\ \widehat{X}''''(z)|_{z = 1} =& E[X(X - 1)(X - 2)(X - 3)]\\ \end{align*}

Note: if the above expressions are not defined at z = 1, one can instead evaluate \lim_{z \rightarrow 1}, which may require L'Hospital's rule.

Proof:

\begin{align*} \widehat{X}(z) =& p_X(0)z^0 + p_X(1)z^1 + p_X(2)z^2 + p_X(3)z^3 + p_X(4)z^4 + ...\\ \widehat{X}'(z) =& p_X(1) + 2p_X(2)z^1 + 3p_X(3)z^2 + 4p_X(4)z^3 + ...\\ \widehat{X}'(z)|_{z = 1} =& p_X(1) + 2p_X(2) + 3p_X(3) + 4p_X(4) + ...\\ \widehat{X}'(z)|_{z = 1} =& E[X]\\ \widehat{X}''(z) =& 2p_X(2) + 3\cdot 2p_X(3)z + 4\cdot 3p_X(4)z^2 + ...\\ \widehat{X}''(z)|_{z = 1} =& 2p_X(2) + 3\cdot 2p_X(3) + 4\cdot 3p_X(4) + ...\\ \widehat{X}''(z)|_{z = 1} =& E[X(X - 1)]\\ ...\\ \end{align*}
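
To see peeling done mechanically, the sketch below differentiates the Poisson onion from above; the k-th derivative at z = 1 comes out to \lambda^k, which is indeed the k-th factorial moment E[X(X - 1) \cdots (X - k + 1)] of a Poisson:

```python
# Peeling the Poisson onion (sympy): the k-th derivative at z = 1 is lambda^k,
# the k-th factorial moment of Poisson(lambda).
import sympy as sp

z = sp.Symbol('z')
lam = sp.Symbol('lambda', positive=True)
G = sp.exp(-lam*(1 - z))

for k in range(1, 5):
    print(k, sp.simplify(sp.diff(G, z, k).subs(z, 1)))  # -> lambda**k
```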

Example: Variance of Geometric

\begin{align*} \widehat{X}(z) =& \frac{zp}{1 - z(1 - p)} \tag{build onion}\\ E[X] =& \frac{d}{dz} \left(\frac{zp}{1 - z(1 - p)}\right)|_{z = 1}\\ =& \frac{p}{(1 - z(1 - p))^2}|_{z = 1}\\ =& \frac{1}{p} \tag{peeling onion 1st layer}\\ E[X^2] =& \widehat{X}''(z)|_{z = 1} + E[X]\\ =& \frac{2p(1 - p)}{(1 - z(1 - p))^3}|_{z = 1} + \frac{1}{p}\\ =& \frac{2(1 - p)}{p^2} + \frac{1}{p}\\ =& \frac{2 - p}{p^2} \tag{peeling onion 2nd layer}\\ Var(X) =& E[X^2] - (E[X])^2\\ =& \frac{1 - p}{p^2}\\ \end{align*}
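
The same computation can be reproduced mechanically with sympy (a sketch mirroring the steps above):

```python
# Reproducing the Geometric variance computation (sympy).
import sympy as sp

z, p = sp.symbols('z p', positive=True)
G = z*p / (1 - z*(1 - p))          # the onion, built above

EX  = sp.simplify(sp.diff(G, z).subs(z, 1))          # peel once:  E[X] = 1/p
EX2 = sp.simplify(sp.diff(G, z, 2).subs(z, 1) + EX)  # E[X^2] = E[X(X-1)] + E[X]
var = sp.simplify(EX2 - EX**2)

print(EX, EX2, var)  # -> 1/p, (2 - p)/p**2, (1 - p)/p**2
```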

An onion holds all the information about the distribution: the mapping from discrete non-negative random variables to z-transforms is injective, so the z-transform uniquely determines the distribution. // TODO: practice question

z-Transform of a Sum of Independent Random Variables

Given discrete random variables X and Y:

X \perp Y \implies \widehat{X + Y}(z) = \widehat{X}(z) \cdot \widehat{Y}(z)

Proof: E[z^{X + Y}] = E[z^X \cdot z^Y] = E[z^X] \cdot E[z^Y] = \widehat{X}(z) \cdot \widehat{Y}(z), where the second equality uses X \perp Y.
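
As an illustration, multiplying two Poisson onions shows that a sum of independent Poissons is again Poisson with the summed rate (a sympy sketch; symbol names are mine):

```python
# The product of two independent Poisson onions is the onion of
# Poisson(lambda1 + lambda2), so independent Poissons add.
import sympy as sp

z, l1, l2 = sp.symbols('z lambda1 lambda2')

lhs = sp.exp(-l1*(1 - z)) * sp.exp(-l2*(1 - z))
rhs = sp.exp(-(l1 + l2)*(1 - z))
print(sp.simplify(lhs - rhs))  # -> 0
```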

Conditioning on z-Transforms

Let X, A, B be discrete random variables with

X = \begin{cases} A & \text{with probability } p\\ B & \text{with probability } (1 - p)\\ \end{cases}

Then we have

\widehat{X}(z) = \widehat{A}(z)p + \widehat{B}(z)(1 - p)

which follows by conditioning on which case occurs and applying the law of total expectation.
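
A small symbolic check with two hypothetical finite distributions, chosen arbitrarily (A uniform on {0, 1}, B uniform on {2, 3}):

```python
# Mixture rule check (sympy): the onion of the mixture equals the
# p-weighted combination of the component onions.
import sympy as sp

z, p = sp.symbols('z p')
pmf_A = {0: sp.Rational(1, 2), 1: sp.Rational(1, 2)}
pmf_B = {2: sp.Rational(1, 2), 3: sp.Rational(1, 2)}

def onion(pmf):
    # z-transform of a finite pmf: sum_i Pr{X = i} z^i
    return sum(prob * z**i for i, prob in pmf.items())

# pmf of the mixture X: each outcome of A weighted by p, of B by (1 - p)
pmf_X = {i: p*pmf_A.get(i, 0) + (1 - p)*pmf_B.get(i, 0) for i in range(4)}

print(sp.simplify(onion(pmf_X) - (p*onion(pmf_A) + (1 - p)*onion(pmf_B))))  # -> 0
```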

Summing a Random Number of i.i.d. Random Variables

Let X_1, X_2, ... be i.i.d. discrete random variables, where X_i \sim X. Let N be a positive discrete random variable where (\forall i)(N \perp X_i). Then:

\begin{align*} S = \sum_{i = 1}^N X_i \implies \widehat{S}(z) = \widehat{N}\left(\widehat{X}(z)\right) \end{align*}

Proof: \widehat{S}(z) = E[z^S] = E\left[E\left[z^{X_1 + \cdots + X_N} \mid N\right]\right] = E\left[\left(E[z^X]\right)^N\right] = E\left[\widehat{X}(z)^N\right] = \widehat{N}\left(\widehat{X}(z)\right), where the middle step uses that the X_i are i.i.d. and independent of N.
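
A classic application is Poisson thinning: if N \sim \text{Poisson}(\lambda) and the X_i are Bernoulli(p), composing the onions gives \widehat{S}(z) = e^{-\lambda p(1 - z)}, i.e., S \sim \text{Poisson}(\lambda p). A sympy sketch:

```python
# Poisson thinning via composition: N ~ Poisson(lambda), X_i ~ Bernoulli(p).
import sympy as sp

z, p = sp.symbols('z p')
lam = sp.Symbol('lambda', positive=True)

X_hat = 1 - p + p*z                  # Bernoulli(p) onion
N_hat = sp.exp(-lam*(1 - z))         # Poisson(lambda) onion
S_hat = N_hat.subs(z, X_hat)         # composition: S_hat(z) = N_hat(X_hat(z))

print(sp.simplify(S_hat - sp.exp(-lam*p*(1 - z))))  # -> 0: S ~ Poisson(lambda*p)
```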

Using z-Transforms to Solve Recurrences

General Formula for the Fibonacci Numbers: f_n = \frac{1}{\sqrt{5}}(\phi^n - (-\phi)^{-n}) where \phi = \frac{1 + \sqrt{5}}{2} is the golden ratio.

The proof is too hard. Refer to the book when you can. // TODO: practice question
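
Even without the proof, the closed form is easy to check numerically against the recurrence (a sketch, assuming the convention f_0 = 0, f_1 = 1; adjust the base cases if the book uses a different indexing):

```python
# Numeric check of the closed form against the recurrence f_n = f_{n-1} + f_{n-2},
# assuming base cases f_0 = 0, f_1 = 1.
from math import sqrt

phi = (1 + sqrt(5)) / 2

def fib_closed(n):
    return round((phi**n - (-phi)**(-n)) / sqrt(5))

f = [0, 1]
for n in range(2, 20):
    f.append(f[-1] + f[-2])

assert all(fib_closed(n) == f[n] for n in range(20))
print("closed form matches the recurrence for n = 0..19")
```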
