Lecture 001

Definitions

Sample Space (\Omega): the set of all possible outcomes

Mutually Exclusive (disjoint): two events are mutually exclusive if E_1 \cap E_2 = \emptyset

Note: if Pr\{E_1\} > 0, Pr\{E_2\} > 0, and E_1 \cap E_2 = \emptyset, then E_1, E_2 are mutually exclusive, but not independent (since Pr\{E_1 \cap E_2\} = 0 \neq Pr\{E_1\} \cdot Pr\{E_2\})

Partition: events E_1, ..., E_n partition set F iff they are mutually exclusive and E_1 \cup ... \cup E_n = F

Event (E): a subset of sample space \Omega. An event can either happen or not happen (E and \bar{E} are mutually exclusive)

Pr\{E\}: the probability that the outcome of the experiment lies in set E. It satisfies three axioms:

  1. Non-negative: (\forall E)(Pr\{E\} \geq 0)
  2. Additive: if E_1, ..., E_n mutually exclusive, then Pr\{E_1 \cup ... \cup E_n\} = Pr\{E_1\} + ... + Pr\{E_n\}
  3. Normalization: Pr\{\Omega\} = 1

Note: the events E_1, ..., E_n should be countably many
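
A minimal sketch of the axioms on a finite sample space (a fair six-sided die; my example, not from the lecture): Pr\{E\} is a sum of outcome probabilities, so all three axioms can be checked directly.

```python
from fractions import Fraction

# Hypothetical example: uniform probabilities on a fair six-sided die.
omega = {1, 2, 3, 4, 5, 6}
p = {w: Fraction(1, 6) for w in omega}

def pr(event):
    # Pr{E} is a sum of non-negative terms, so axiom 1 holds by construction.
    return sum(p[w] for w in event)

assert pr(omega) == 1                      # axiom 3: normalization
E1, E2 = {1, 2}, {5}                       # mutually exclusive events
assert E1 & E2 == set()
assert pr(E1 | E2) == pr(E1) + pr(E2)      # axiom 2: additivity
```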

Calculation

Inclusion-Exclusion (for possibly intersecting events): Pr\{E \cup F\} = Pr\{E\} + Pr\{F\} - Pr\{E \cap F\}

Union Bound: Pr\{E \cup F\} \leq Pr\{E\} + Pr\{F\}

The Union Bound (together with taking complements to convert between \forall and \exists quantifiers) is very useful when you have \max(\cdot) or \min(\cdot), since \{\max_i X_i \geq t\} = \cup_i \{X_i \geq t\}
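
A Monte Carlo sketch of this use of the union bound (my toy setup, not from the lecture): the tail of a max is bounded by the sum of the individual tails.

```python
import random

# Hypothetical setup: n independent standard Gaussians, threshold t.
# {max_i X_i >= t} = U_i {X_i >= t}, so Pr{max >= t} <= sum_i Pr{X_i >= t}.
random.seed(0)
n, t, trials = 5, 1.5, 100_000

hits_max = 0                   # count of event {max_i X_i >= t}
hits_each = [0] * n            # counts of events {X_i >= t}
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    hits_max += max(xs) >= t
    for i, x in enumerate(xs):
        hits_each[i] += x >= t

lhs = hits_max / trials
rhs = sum(h / trials for h in hits_each)
print(f"Pr{{max >= t}} ~ {lhs:.4f} <= union bound {rhs:.4f}")
assert lhs <= rhs
```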

Conditional Probability of event E given F: assume Pr\{F\} > 0, Pr\{E | F\} = \frac{Pr\{E \cap F\}}{Pr\{F\}}

Chain Rule: Pr\{\cap_{i = 1}^n E_i\} = Pr\{E_1\} \cdot Pr\{E_2 | E_1\} \cdot Pr\{E_3 | E_1 \cap E_2\} \cdots Pr\{E_n | E_1 \cap E_2 \cap ... \cap E_{n-1}\}

For two events, Pr\{E \cap F\} = Pr\{F\} \cdot Pr\{E | F\}: think of it as first landing in F, and then choosing E within F with probability Pr\{E | F\}
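
A tiny worked instance of the chain rule (my example, not from the notes): the chance of drawing three aces in a row from a standard 52-card deck without replacement.

```python
from fractions import Fraction

# Pr{A1 ∩ A2 ∩ A3} = Pr{A1} · Pr{A2 | A1} · Pr{A3 | A1 ∩ A2}:
# 4 aces out of 52 cards, then 3 of 51, then 2 of 50.
p = Fraction(4, 52) * Fraction(3, 51) * Fraction(2, 50)
print(p)    # 1/5525
```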

Independent Events: E \perp F iff Pr\{E \cap F\} = Pr\{E\} \cdot Pr\{F\}, or equivalently (when Pr\{F\} > 0) Pr\{E | F\} = Pr\{E\}; the two definitions agree by the chain rule

Proof: E \perp G \implies E \perp \bar{G}

\begin{align*}
Pr\{E \cap \bar{G}\} & = Pr\{E - (E \cap G)\} \\
& = Pr\{E\} - Pr\{E \cap G\} \\
& = Pr\{E\} - Pr\{E\} \cdot Pr\{G\} \\
& = Pr\{E\} (1 - Pr\{G\}) \\
& = Pr\{E\} \cdot Pr\{\bar{G}\}
\end{align*}
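
A quick numeric check of this implication on a toy space (two fair dice; my choice of E and G, not the lecture's):

```python
from fractions import Fraction
from itertools import product

# Omega = all ordered rolls of two fair dice.
omega = list(product(range(1, 7), repeat=2))
pr = lambda ev: Fraction(sum(1 for w in omega if ev(w)), len(omega))

E = lambda w: w[0] % 2 == 0      # first die is even
G = lambda w: w[1] <= 2          # second die is 1 or 2
notG = lambda w: not G(w)

assert pr(lambda w: E(w) and G(w)) == pr(E) * pr(G)          # E ⊥ G
assert pr(lambda w: E(w) and notG(w)) == pr(E) * pr(notG)    # hence E ⊥ Ḡ
```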

Multiple Independent Events (mutual independence): A_1, ..., A_n are mutually independent iff for every subset S \subseteq \{1, ..., n\}, Pr\{\cap_{i \in S} A_i\} = \prod_{i \in S} Pr\{A_i\}

Pairwise Independence: (\forall i \neq j)(Pr\{A_i \cap A_j\} = Pr\{A_i\} \cdot Pr\{A_j\}); this is strictly weaker than mutual independence
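
A classic counterexample (my addition; the notes do not spell one out): three events on two fair coins that are pairwise independent but not mutually independent.

```python
from fractions import Fraction
from itertools import product

# Omega = two fair coin tosses.
omega = list(product("HT", repeat=2))
pr = lambda ev: Fraction(sum(1 for w in omega if ev(w)), len(omega))

A1 = lambda w: w[0] == "H"       # coin 1 is heads
A2 = lambda w: w[1] == "H"       # coin 2 is heads
A3 = lambda w: w[0] == w[1]      # the coins agree

# Every pair is independent ...
for X, Y in [(A1, A2), (A1, A3), (A2, A3)]:
    assert pr(lambda w: X(w) and Y(w)) == pr(X) * pr(Y)

# ... but the triple is not: Pr{A1 ∩ A2 ∩ A3} = 1/4 ≠ 1/8.
assert pr(lambda w: A1(w) and A2(w) and A3(w)) != pr(A1) * pr(A2) * pr(A3)
```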

Conditional Independence: E and F are conditionally independent given G iff Pr\{E \cap F | G\} = Pr\{E | G\} \cdot Pr\{F | G\}. Note that conditional independence neither implies nor is implied by unconditional independence
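
A sketch of why the conditional and unconditional notions differ (my example, not from the lecture): two flips of a randomly chosen coin are independent given which coin was picked, but correlated unconditionally.

```python
from fractions import Fraction

# G = "picked the 3/4-heads coin" (else a fair coin), each with prob 1/2.
# Given G (or given ~G), the two flips are independent by construction.
pG = Fraction(1, 2)
bias = {True: Fraction(3, 4), False: Fraction(1, 2)}    # Pr{heads | which coin}

pH  = pG * bias[True]    + (1 - pG) * bias[False]       # Pr{H1} = Pr{H2} = 5/8
pHH = pG * bias[True]**2 + (1 - pG) * bias[False]**2    # Pr{H1 ∩ H2} = 13/32

# Unconditionally the flips are correlated through the shared coin.
assert pHH != pH * pH                                   # 13/32 ≠ 25/64
```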

Law of Total Probability: E = (E \cap F) \cup (E \cap \bar{F}) is a disjoint union, so Pr\{E\} = Pr\{E | F\} \cdot Pr\{F\} + Pr\{E | \bar{F}\} \cdot Pr\{\bar{F}\}

General Law of Total Probability: if F_1, F_2, ..., F_n partition the whole space \Omega, then

\begin{align*}
Pr\{E\} &= \sum_{i = 1}^n Pr\{E \cap F_i\} \\
&= \sum_{i = 1}^n Pr\{E | F_i\} \cdot Pr\{F_i\} \\
&= Pr\{E | F_1\} \cdot Pr\{F_1\} + Pr\{E | F_2\} \cdot Pr\{F_2\} + ... + Pr\{E | F_n\} \cdot Pr\{F_n\}
\end{align*}
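
A worked instance with made-up numbers (not from the lecture): three urns F_1, F_2, F_3 chosen with given probabilities, E = drawing a red ball.

```python
from fractions import Fraction

# Hypothetical numbers: Pr{F_i} and Pr{E | F_i} for a partition of size 3.
pr_F = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]
pr_E_given_F = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]

# Pr{E} = sum_i Pr{E | F_i} · Pr{F_i}
pr_E = sum(pe * pf for pe, pf in zip(pr_E_given_F, pr_F))
print(pr_E)    # 1/8 + 1/6 + 1/8 = 5/12
```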

Conditional Law of Total Probability: if F_1, F_2, ..., F_n partition the whole space \Omega, then

\begin{align*}
Pr\{E | G\} &= \sum_{i = 1}^n Pr\{E | F_i \cap G\} \cdot Pr\{F_i | G\} \\
&= Pr\{E | F_1 \cap G\} \cdot Pr\{F_1 | G\} + Pr\{E | F_2 \cap G\} \cdot Pr\{F_2 | G\} + ... + Pr\{E | F_n \cap G\} \cdot Pr\{F_n | G\}
\end{align*}
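
The same computation conditioned on an extra event G, again with made-up numbers of mine:

```python
from fractions import Fraction

# Hypothetical conditional weights: Pr{F_i | G} and Pr{E | F_i ∩ G}.
pF_G  = [Fraction(3, 4), Fraction(1, 4)]
pE_FG = [Fraction(1, 3), Fraction(1, 5)]

# Pr{E | G} = sum_i Pr{E | F_i ∩ G} · Pr{F_i | G}
pE_G = sum(pe * pf for pe, pf in zip(pE_FG, pF_G))
print(pE_G)    # 1/4 + 1/20 = 3/10
```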

Bayes Law:

Pr\{F | E\} = \frac{Pr\{F \cap E\}}{Pr\{E\}} = \frac{Pr\{F\} \cdot Pr\{E | F\}}{Pr\{E\}} = \frac{Pr\{F\} \cdot Pr\{E | F\}}{Pr\{E | F\} \cdot Pr\{F\} + Pr\{E | \bar{F}\} \cdot Pr\{\bar{F}\}}

Extended Bayes Law: suppose F_1, F_2, ..., F_n partition \Omega; then for each cell F_i, Pr\{F_i | E\} = \frac{Pr\{F_i\} \cdot Pr\{E | F_i\}}{\sum_{j = 1}^n Pr\{E | F_j\} \cdot Pr\{F_j\}}
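
The classic rare-disease illustration of Bayes Law (my numbers, not the lecture's): even a fairly accurate test leaves the posterior small when the prior is tiny.

```python
from fractions import Fraction

# F = "has disease", E = "test is positive" (hypothetical rates).
pF, pE_F, pE_notF = Fraction(1, 1000), Fraction(99, 100), Fraction(5, 100)

pE = pE_F * pF + pE_notF * (1 - pF)    # law of total probability
pF_E = pF * pE_F / pE                  # Bayes law
print(pF_E, float(pF_E))               # 11/566 ≈ 0.019: still unlikely
```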

Random Variable: a mapping from experiment outcomes to the numbers we care about
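
A minimal sketch of this view (my example): the outcome is a roll of two dice, and the random variable X picks out the sum.

```python
from itertools import product

# Omega = all ordered rolls of two dice; X maps each outcome to a number.
omega = list(product(range(1, 7), repeat=2))
X = lambda w: w[0] + w[1]                   # X : Omega -> numbers

# The event {X = 7} is just a subset of Omega, so its probability is a ratio.
p7 = sum(1 for w in omega if X(w) == 7) / len(omega)
print(p7)                                   # 6/36 ≈ 0.1667
```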

Example:

Bertrand paradox

The result of a probability calculation depends on how you define "random".

Bertrand paradox: lines "randomly" sampled by different random rules

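A Monte Carlo sketch (assuming the classic chord formulation of the paradox, which the notes do not spell out): the probability that a "random" chord of the unit circle is longer than \sqrt{3}, the side of the inscribed equilateral triangle, comes out different under each sampling rule.

```python
import math, random

random.seed(0)
SIDE = math.sqrt(3)          # side of the inscribed equilateral triangle
N = 200_000

def endpoints():             # rule 1: two uniform endpoints on the circle
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)

def radius():                # rule 2: chord midpoint uniform along a radius
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def midpoint():              # rule 3: chord midpoint uniform in the disk
    while True:              # rejection-sample a point inside the unit disk
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

for rule, expected in [(endpoints, 1/3), (radius, 1/2), (midpoint, 1/4)]:
    p = sum(rule() > SIDE for _ in range(N)) / N
    print(f"{rule.__name__:>9}: {p:.3f} (theory {expected:.3f})")
```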
