# Review

## Matrix Transformation

Invertible: if both $A, B \in \mathbb{R}^{n \times n}$ are invertible, then $AB$ is invertible and $(AB)^{-1} = B^{-1}A^{-1}$

• inverse of matrix $\begin{pmatrix} a & b\\ c & d\\ \end{pmatrix}$ is $\frac{1}{ad-bc} \begin{pmatrix} d & -b\\ -c & a\\ \end{pmatrix}$ if $ad - bc \neq 0$

• invertible iff $\det A \neq 0$ (for the $2 \times 2$ case, $ad - bc \neq 0$)

• if $A \in \mathbb{R}^{n \times n}$ is invertible, then $A^k$ is invertible and $(A^k)^{-1} = (A^{-1})^k = A^{-k}$
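A quick numerical check of the identities above (the matrices are chosen arbitrarily for illustration):

```python
import numpy as np

# Two invertible 2x2 matrices (arbitrary choices)
A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 2.0], [0.0, 1.0]])

# (AB)^{-1} == B^{-1} A^{-1}
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
assert np.allclose(lhs, rhs)

# (A^3)^{-1} == (A^{-1})^3
assert np.allclose(np.linalg.inv(np.linalg.matrix_power(A, 3)),
                   np.linalg.matrix_power(np.linalg.inv(A), 3))

# 2x2 formula: inv([[a,b],[c,d]]) = 1/(ad-bc) [[d,-b],[-c,a]]
a, b, c, d = 2.0, 1.0, 1.0, 1.0
formula = np.array([[d, -b], [-c, a]]) / (a * d - b * c)
assert np.allclose(formula, np.linalg.inv(A))
```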

Transpose:

• $A \text{ is invertible } \implies A^\intercal \text{ is invertible}$

• $(AB)^T = B^TA^T$

Theorem: $A \in \mathbb{R}^{n \times n}$, the following are equivalent

• $A$ is invertible

• $Ax = b$ has a unique solution for every $b \in \mathbb{R}^n$ ($A$ is Non-Singular Matrix)

• The only solution for $Ax = 0$ is $x = 0$

• Row reduction of $Ax = 0$ reaches $Ix = 0$, i.e. $x_1 = x_2 = \dots = x_n = 0$ (the coefficient matrix reduces to the identity)

• Row reduction of $Ax = b$ reaches $Ix = c$ for some vector $c$

• $A$ is a product of elementary matrices

Gauss Jordan Method:

• If the matrix is not invertible, then you will lose a pivot and end up with a row of zeros.

Lower Triangular Matrix: $\begin{pmatrix} \circ & 0 & 0 & 0\\ x & \circ & 0 & 0\\ x & x & \circ & 0\\ x & x & x & \circ\\ \end{pmatrix}$

Upper Triangular Matrix: $\begin{pmatrix} \circ & x & x & x\\ 0 & \circ & x & x\\ 0 & 0 & \circ & x\\ 0 & 0 & 0 & \circ\\ \end{pmatrix}$

$LU$ factorization: Every invertible matrix $A$ that can be row-reduced without row swaps can be written as a product of a lower and an upper triangular matrix, $A = LU$

• $E_{a, b}$ is elementary matrix that uses row $b$ to modify row $a$.

• $A = LU$ (equivalently $L^{-1}A = U$) where $L^{-1} = E_{n-1} \cdots E_1 E_0$ is the product of the elimination matrices (first step applied rightmost) and $U$ is upper triangular

• Using $LU$ to find $x$: to solve $A \overrightarrow{x} = \overrightarrow{b}$, compute $\overrightarrow{x} = U^{-1}(L^{-1}\overrightarrow{b})$ (in practice via forward and back substitution rather than explicit inverses)

• LU factorization is not unique

• To Find LU:

• When doing Gaussian elimination, keep an extended matrix that starts from the identity matrix on the left
• For each operation $R_i = R_i - (n)R_j$: write $n$ in entry $(i, j)$ of the left matrix
• the result is $L$ on the left and $U$ on the right
• // WARNING: this process is not the same as finding $L^{-1}$ (Gauss–Jordan method); this method finds $L$ directly.
• // WARNING: to find $L^{-1}$ instead, apply to $I$ the same row operations used on $A$ during Gaussian elimination
• To solve $Ax = b$: first solve $L(Ux) = b$ for $Ux$ (forward substitution), then solve $Ux = (Ux)$ for $x$ (back substitution).
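The steps above can be sketched in numpy; this is a minimal Doolittle-style factorization assuming no zero pivots arise (so no row swaps are needed):

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization without row swaps (assumes no zero pivots)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for j in range(n):
        for i in range(j + 1, n):
            m = U[i, j] / U[j, j]   # the multiplier n in R_i = R_i - (n)R_j
            L[i, j] = m             # record it directly in entry (i, j) of L
            U[i, :] -= m * U[j, :]
    return L, U

def solve_lu(L, U, b):
    """Solve L(Ux) = b: forward substitution for y = Ux, then back substitution."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                        # L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):              # U x = y
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)
x = solve_lu(L, U, b)
assert np.allclose(A @ x, b)
```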

## Vector Space

Subspace Theorem: If $V$ is a vector space, then $W$ is a subspace of $V$ iff

1. $W \subseteq V$
2. $W \neq \emptyset$
3. $W$ is closed under addition, and
4. $W$ is closed for scalar multiplication

// TODO: is a subspace always a vector space?

Trivial Linear Combination: all coefficients are zero

Trivial Subspaces: the zero subspace $\{0\}$ and the whole space

Linear independence: a set of vectors is linearly independent iff $c_1v_1 + c_2v_2 + ... + c_nv_n = 0 \implies c_1 = c_2 = ... = c_n = 0$ (or $CV = 0 \implies C = \overrightarrow{0}$)

• Let $w \in \text{span}(V)$, then $\{w\} \cup V$ is linearly dependent. (Let $w \notin \text{span}(V)$, then $\{w\} \cup V$ is linearly independent.)

• If $V$ is linearly dependent, there exists $w \in V$ such that $w \in \text{span}(V \setminus \{w\})$

• $v_1, v_2, ..., v_n \in V$ are linearly independent iff $\forall w \in \text{span}(V)$, $w$ can be expressed as a unique linear combination of $V$.

• If $v_1, v_2, ..., v_n$ are linearly independent, but $v_1, v_2, ..., v_n, m$ are linearly dependent, then $m \in \text{span}(\{v_1, ..., v_n\})$

// TODO: prove the last one
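The last bullet can be checked numerically: independence of a set of vectors is equivalent to the matrix of those columns having full column rank (the vectors here are arbitrary examples):

```python
import numpy as np

def is_independent(vectors):
    """Columns are linearly independent iff rank equals the number of vectors."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == M.shape[1]

v1, v2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
m = 2 * v1 - 3 * v2                      # m is in span({v1, v2}) by construction

assert is_independent([v1, v2])
assert not is_independent([v1, v2, m])   # adding m makes the set dependent
# consistent with the bullet: the rank stays 2, so m lies in span({v1, v2})
assert np.linalg.matrix_rank(np.column_stack([v1, v2, m])) == 2
```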

Let $A \in \mathbb{R}^{n \times n}$, then the following are equivalent:

• $A$ is invertible

• $N(A) = \{0\}$ (nullspace)

• columns of $A$ are linearly independent

• rows of $A$ are linearly independent

• $Ax = b$ has a unique solution for every $b \in \mathbb{R}^n$ ($A$ is Non-Singular Matrix)

• The only solution for $Ax = 0$ is $x = 0$

• Row reduction of $Ax = 0$ reaches $Ix = 0$, i.e. $x_1 = x_2 = \dots = x_n = 0$ (the coefficient matrix reduces to the identity)

• Row reduction of $Ax = b$ reaches $Ix = c$ for some vector $c$

• $A$ is a product of elementary matrices

// TODO: what is dimension of a matrix?

// TODO: why orthogonal matrix transpose is its inverse

$\text{row-rank}(A) = \text{column-rank}(A)$

Gaussian Elimination: let $A \rightarrow U \rightarrow R$, then $RS(A) = RS(U) = RS(R)$ (imagine the row vectors spanning a space)

• Swapping rows: swaps two axes (the determinant changes sign; the column space may change)

• Multiplying a row by a constant: scales an axis by that constant (the determinant scales by the same constant; the column space may change)

• Adding/subtracting a multiple of one row: a shear, moving the origin along a vector with the vector tips fixed (the parallelepiped changes shape but its volume, the determinant, does not)

• The row space remains by definition of linear combination

Rank Theorems:

1. $\text{row-rank}(A) = \text{column-rank}(A) = \text{num-pivots}(\text{rref}(A))$
2. Rank–nullity theorem: $\text{dim}(N(A)) + \text{rank}(A) = \text{num-column}(A)$
3. $\text{row-rank}(\text{rref}(A)) = \text{num-pivots}(\text{rref}(A)) = \text{num-non-zero-row}(\text{rref}(A)) = \text{column-rank}(\text{rref}(A))$ (proof: the pivot columns are linearly independent by definition)
4. $\text{row-rank}(A) = \text{row-rank}(\text{rref}(A))$
5. $RS(A) = RS(\text{rref}(A))$ (column space are not the same for $A$ and $rref(A)$, but the rank will not change)
6. $\text{column-rank}(A) = \text{column-rank}(\text{rref}(A))$
7. $N(A) = N(\text{rref}(A))$ (row operations preserve the solution set of $Ax = 0$)
8. $\text{dim}(N(A)) + \text{column-rank}(\text{rref}(A)) = \text{num-column}(A)$ (prove by construct set of null space, number of free variable = number of columns - number of pivots)

(In summary, $\text{rref}$ preserves $RS(\cdot)$ and $N(\cdot)$ but not $C(\cdot)$; the rank of every subspace is always preserved.)
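These rank theorems can be verified with sympy's exact-arithmetic `rref`, `rank`, and `nullspace` (the matrix is an arbitrary rank-deficient example):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])
R, pivots = A.rref()

# rank is preserved by rref and equals the number of pivots (Theorems 1, 3, 4, 6)
assert A.rank() == R.rank() == len(pivots)

# N(A) = N(rref(A)) (Theorem 7): the nullspace bases agree
assert A.nullspace() == R.nullspace()

# rank-nullity (Theorem 2): dim N(A) + rank(A) = number of columns
assert len(A.nullspace()) + A.rank() == A.cols
```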

Corollary:

1. $Ax = 0 \iff rref(A)x = 0$
2. $(rref(A)x = 0 \implies x = 0) \implies \text{the columns of } A \text{ are linearly independent}$ // QUESTION

### Finding Basis

Find $RS(A)$: pick the non-zero rows of $rref(A)$ (those beginning with a leading 1)

Find $C(A)$: pick the pivot columns of $rref(A)$ and take the corresponding columns of $A$

Finding a basis from scratch:

1. reduce the matrix to $\text{rref}$ form
2. record the indices of the columns that are standard basis vectors (the pivot columns)
3. in the original matrix, the columns at the recorded indices form a basis

Given an incomplete basis, add vectors so that it becomes a basis:

1. extend the set with vectors from a known basis
2. for each added vector, subtract its projection onto the span of the vectors already collected (discard it if the result is zero)

Finding a null space:

1. solve $Ax = 0$, expressing $x, y, z, ...$ in terms of the free variables
2. write the general solution as a linear combination of vectors; those vectors form a basis

Finding dimension of null space:

1. the dimension of the null space is unchanged by $\text{rref}$
2. therefore $n - \text{num-pivots}(\text{rref}(A))$ is the dimension of the null space
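sympy automates exactly this recipe: `nullspace()` returns one basis vector per free variable, and its length matches $n$ minus the number of pivots (the matrix is an arbitrary example):

```python
from sympy import Matrix

A = Matrix([[1, 2, 1, 0],
            [2, 4, 0, 2]])
R, pivots = A.rref()

basis = A.nullspace()                    # one basis vector per free variable
n = A.cols
assert len(basis) == n - len(pivots)     # dim N(A) = n - num-pivots
for v in basis:
    assert A * v == Matrix([0, 0])       # each basis vector solves Ax = 0
```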

Linear Transformation: $T(\alpha v + \beta w) = \alpha T(v) + \beta T(w)$

• Rotation: $\begin{pmatrix} \cos \theta & -\sin \theta\\ \sin \theta & \cos \theta\\ \end{pmatrix}$

// TODO: practice find transformation given transformation in two basis

Theorem: $\text{dim}(\text{Img}(T_A)) + \text{dim}(\text{Ker}(T_A)) = n$ (where n is the dimension of starting space)

| $T^{m \times n} : \mathbb{R}^n \rightarrow \mathbb{R}^m$ | injective | not injective |
|---|---|---|
| surjective | $\text{rank}(T) = n = m$ | $\text{rank}(T) = m$ ($m < n$) |
| not surjective | $\text{rank}(T) = n$ ($m > n$) | $\text{rank}(T) < m, n$ |

Think the above table in the following way:

• $\text{rank}(T)$: the number of connected dots on the range of the function

• $n$: the number of total dots on the domain of the function

• $m$: the number of total dots on the range of the function

• each dot above represent a dimension

Theorem: For any matrix $A \in \mathbb{R}^{m \times n}$

• $RS(A)^\perp = N(A)$ ($\text{dim}(V) + \text{dim}(V^\perp) = n$)

• $C(A)^\perp = \text{left-nullspace}(A)$

• Lemma: $N(A) \perp RS(A)$

Finding Change of basis:

1. given two bases $B, C$, find the standard-basis representation of each basis vector of $B$, e.g. $\{\begin{pmatrix} 1\\ 0\\ \end{pmatrix}_B, \begin{pmatrix} 0\\ 1\\ \end{pmatrix}_B\}$
2. convert each of those vectors into coordinates with respect to $C$
3. put the results as the columns of the change-of-basis matrix

## Orthogonal Basis

Finding Orthogonal Complement: (finding null space)

1. Find a basis of $V$
2. Find a set of linearly independent vectors $W$ that are perpendicular to every $v \in V$
3. $\text{span}(W) = V^\perp$

$Ax = b$ has a solution iff every $y$ satisfying $y^T A = 0$ also satisfies $y^T b = 0$

Projection to plane: let $\{b_1, b_2\}$ be an orthogonal basis of the plane. $T(v) = \frac{b_1 \cdot v}{b_1 \cdot b_1} b_1 + \frac{b_2 \cdot v}{b_2 \cdot b_2} b_2$

• in general $T(v) = \frac{b_1 \cdot v}{b_1 \cdot b_1} b_1 + \frac{b_2 \cdot v}{b_2 \cdot b_2} b_2 + ... + \frac{b_k \cdot v}{b_k \cdot b_k} b_k$

• in general, for matrix: $P_W = P_{b_1} + P_{b_2} + ... + P_{b_k}$
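A minimal sketch of this projection formula, assuming the basis vectors are mutually orthogonal (here the standard basis of the plane $z = 0$, chosen for easy checking):

```python
import numpy as np

def proj(v, basis):
    """Project v onto span(basis); basis vectors must be mutually orthogonal."""
    return sum((b @ v) / (b @ b) * b for b in basis)

b1 = np.array([1.0, 0.0, 0.0])   # orthogonal basis of the plane z = 0
b2 = np.array([0.0, 1.0, 0.0])
v = np.array([3.0, 4.0, 5.0])

p = proj(v, [b1, b2])
assert np.allclose(p, [3.0, 4.0, 0.0])    # projection drops the z-component
# the residual v - p is perpendicular to the plane
assert np.isclose((v - p) @ b1, 0) and np.isclose((v - p) @ b2, 0)
```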

Gram–Schmidt orthogonalization: any subspace of $\mathbb{R}^n$ has an orthogonal basis.

Gram–Schmidt process

1. take the next basis vector $v$
2. add a new vector $b$ to the collection ($b_1, ..., b_n$ so far) that is perpendicular to $b_1, ..., b_n$ and keeps the collection spanning $\text{span}(\{b_1, ..., b_n, v\})$
• For the first projection: $b_2 = v_2 - \text{proj}_{b_1}(v_2) = v_2 - \frac{b_1 \cdot v_2}{b_1 \cdot b_1} b_1 \perp b_1$
• For later projections: $b = v - \text{proj}_{\text{span}(b_1, ..., b_n)}(v) = v - \frac{b_1 \cdot v}{b_1 \cdot b_1}b_1 - \frac{b_2 \cdot v}{b_2 \cdot b_2}b_2 - ... - \frac{b_n \cdot v}{b_n \cdot b_n}b_n$
3. repeat from step 1 until every basis vector has been processed
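The process above can be sketched directly (no normalization, so the output is orthogonal rather than orthonormal; the input vectors are arbitrary examples):

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthogonal basis spanning the same space as the input."""
    basis = []
    for v in vectors:
        b = v.astype(float).copy()
        for u in basis:                     # subtract projection of v onto each
            b -= (u @ v) / (u @ u) * u      # previously built basis vector
        if not np.allclose(b, 0):           # skip linearly dependent inputs
            basis.append(b)
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
B = gram_schmidt(vs)
for i in range(len(B)):
    for j in range(i + 1, len(B)):
        assert abs(B[i] @ B[j]) < 1e-12     # pairwise orthogonal
```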

Orthogonal Matrix: a square matrix whose columns are orthonormal vectors

For $Q \in \mathbb{R}^{n \times n}$, the following are equivalent

• $Q \text{ is orthogonal}$

• Transpose Identity: $Q^T = Q^{-1}$
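A quick check of the transpose identity on a rotation matrix (rotations are a standard example of orthogonal matrices; the angle is arbitrary):

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthonormal columns

assert np.allclose(Q.T @ Q, np.eye(2))            # Q^T Q = I
assert np.allclose(Q.T, np.linalg.inv(Q))         # hence Q^T = Q^{-1}
assert np.isclose(abs(np.linalg.det(Q)), 1.0)     # |det Q| = 1
```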

## Determinant

Properties of Determinant:

• $\det A \neq 0 \iff A \text{ is invertible}$

• $\det A = \det A^T$

• If $A$ invertible, then $\det A^{-1} = \frac{1}{\det A}$

Determinant rank: $\text{det-rank}(A) = \max(\{k | B^{k \times k} = \text{submatrix}(A) \land \det B \neq 0\})$

• Theorem: for any $A \in \mathbb{R}^{n \times n}$, $\text{row-rank}(A) = \text{column-rank}(A) = \text{determinant-rank}(A)$

• // TODO: proof

Cramer's rule: $\{\frac{\det A_i(b)}{\det A} | 1 \leq i \leq n\}$ is the unique solution to any system $Ax = b$ if $A$ is invertible // TODO: proof

• Notation: $A_i(b)$ means matrix obtained by replacing $i$-th column of matrix $A$ with column vector $b$

• $A = (a_{ij})_{1 \leq i, j \leq n}$

• $(i, j)$-cofactor of $A$ is $C_{ij} = (-1)^{i+j} \det A_{ij}$

• Adjoint Matrix: \begin{align*} \text{adj} A = &(C_{ij})_{1 \leq i, j \leq n}^T\\ = &(C_{ji})_{1 \leq i, j \leq n}\\ = &\begin{pmatrix} C_{11} & C_{21} & ... & C_{n1}\\ C_{12} & C_{22} & ... & C_{n2}\\ ... & ... & ... & ...\\ C_{1n} & C_{2n} & ... & C_{nn}\\ \end{pmatrix} \end{align*}

• Theorem: If $A \in \mathbb{R}^{n \times n}$ is invertible, then $A^{-1} = \frac{1}{\det A} \text{adj} A$ // TODO: proof
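Cramer's rule and the adjugate identity can be checked numerically (the system is an arbitrary $2 \times 2$ example):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via Cramer's rule: x_i = det(A_i(b)) / det(A)."""
    n = A.shape[0]
    detA = np.linalg.det(A)
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b                 # A_i(b): replace i-th column with b
        x[i] = np.linalg.det(Ai) / detA
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = cramer_solve(A, b)
assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.solve(A, b))

# adjugate identity, rearranged from A^{-1} = adj(A)/det(A)
adj = np.linalg.det(A) * np.linalg.inv(A)
assert np.allclose(adj @ A, np.linalg.det(A) * np.eye(2))
```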

Geometric Collinearity

• 2D collinear test: $(a_1, a_2), (b_1, b_2), (c_1, c_2)$ are collinear iff $\begin{vmatrix} a_1 & a_2 & 1\\ b_1 & b_2 & 1\\ c_1 & c_2 & 1\\ \end{vmatrix} = 0$

• 3D coplanar test: $(a_1, a_2, a_3), (b_1, b_2, b_3), (c_1, c_2, c_3), (d_1, d_2, d_3)$ are in the same plane (coplanar) iff $\begin{vmatrix} a_1 & a_2 & a_3 & 1\\ b_1 & b_2 & b_3 & 1\\ c_1 & c_2 & c_3 & 1\\ d_1 & d_2 & d_3 & 1\\ \end{vmatrix} = 0$

• 2D line: line from $(b_1, b_2), (c_1, c_2)$ can be expressed as $\begin{vmatrix} x & y & 1\\ b_1 & b_2 & 1\\ c_1 & c_2 & 1\\ \end{vmatrix} = 0$

• 3D plane: the plane through $(b_1, b_2, b_3), (c_1, c_2, c_3), (d_1, d_2, d_3)$ can be expressed as $\begin{vmatrix} x & y & z & 1\\ b_1 & b_2 & b_3 & 1\\ c_1 & c_2 & c_3 & 1\\ d_1 & d_2 & d_3 & 1\\ \end{vmatrix} = 0$
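The 2D determinant test is easy to sketch (the points are arbitrary examples, the first three lying on the line $y = x$):

```python
import numpy as np

def collinear_2d(p, q, r):
    """Three 2D points are collinear iff the bordered determinant vanishes."""
    M = np.array([[p[0], p[1], 1.0],
                  [q[0], q[1], 1.0],
                  [r[0], r[1], 1.0]])
    return np.isclose(np.linalg.det(M), 0.0)

assert collinear_2d((0, 0), (1, 1), (2, 2))        # points on y = x
assert not collinear_2d((0, 0), (1, 1), (2, 3))    # third point off the line
```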

## Eigenvalue and Eigenvector

Eigenvalue and (non-trivial) Eigenvector: $\lambda \in \mathbb{R}$ is an eigenvalue of the matrix $A^{n \times n}$ if $(\exists v \in \mathbb{R}^n)(v \neq 0 \land Av = \lambda v)$ where $v$ is an eigenvector

• Find Eigenvector given Eigenvalue: \begin{align*} Av &= \lambda v\\ Av - \lambda v &= 0\\ Av - \lambda I v &= 0\\ (A - \lambda I) v &= 0\\ \end{align*} // QUESTION: is it a zoom?

• $v \in N(A - \lambda I) \land v \neq 0 \iff v \text{ is a non-trivial eigenvector}$

• Find Eigenvalue given Matrix: \begin{align*} &\lambda \text{ is eigenvalue of } A\\ \iff &(A - \lambda I)x = 0 \text{ has non-trivial solution} \\ \iff &(A - \lambda I) \text{ is singular}\\ \iff &\det(A - \lambda I) = 0\\ \end{align*}

• Characteristic polynomial: $\det (A - \lambda I)$ (expanding determinant = 0 leads to find root of a polynomial = 0)

Eigenspace ($E_\lambda$ corresponding to $A$): the nullspace of $(A - \lambda I)$, i.e. all the eigenvectors corresponding to $A, \lambda$ together with the zero vector // TODO: look at it
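The eigenvalue/eigenvector definitions above can be verified with numpy (the symmetric matrix is an arbitrary example with real eigenvalues):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors

for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)                   # A v = λ v
    assert np.allclose((A - lam * np.eye(2)) @ v, 0)     # v ∈ N(A - λI)

# the characteristic polynomial det(A - λI) vanishes at each eigenvalue
for lam in eigvals:
    assert np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0)
```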

Multiplicity:

• Algebraic Multiplicity ($\text{mult}_{a}$): the algebraic multiplicity of $\lambda$ is its multiplicity as a root of the characteristic equation (how many times this value occurs as a solution).

• Algebraic Multiplicities: the multiset of non-distinct solutions of the characteristic polynomial

• Geometric Multiplicity ($\text{mult}_{g}$): the geometric multiplicity of $\lambda$ is the dimension of its eigenspace. ($\text{mult}_{g}(\lambda) = \text{dim}(E_{\lambda}) = \text{dim}(N(A - \lambda I))$)

• Theorem: $\text{mult}_{a}(\lambda) \geq \text{mult}_{g}(\lambda)$ // QUESTION: why

• Diagonalizable: $A^{n \times n}$ is diagonalizable iff $\text{mult}_{a}(\lambda) = \text{mult}_{g}(\lambda)$ for every eigenvalue $\lambda$ (with the characteristic polynomial having $n$ real roots counted with multiplicity) // QUESTION: why

• Diagonalize Matrix as a scale (backward): $S^{-1}AS = D \implies \text{the diagonal of } D \text{ are eigenvalues, and the column of } S \text{ are corresponding eigenvectors}$

• Linearly Independent Basis: Let $\{\lambda_1, \lambda_2, ..., \lambda_n\}, \{E_1, E_2, ..., E_n\}, \{B_1, B_2, ..., B_n\}$ be the distinct eigenvalues, their eigenspaces, and bases of those eigenspaces; then all the vectors from all these bases together are linearly independent. // QUESTION: why, what if A is a scaling matrix?

• Corollary: $\text{dim}(E_1) + \text{dim}(E_2) + ... + \text{dim}(E_k) \leq n$ (that is $\text{mult}_g(\lambda_1) + \text{mult}_g(\lambda_2) + ... + \text{mult}_g(\lambda_k) \leq n$)

// TODO: understand below Properties of Eigenvector and Eigenvalue

1. $0$ is an eigenvalue of $A$ iff $A$ is singular (determinant zero, squashing space)
2. If $\lambda_1, \lambda_2, ..., \lambda_n$ are different eigenvalues of $A \in \mathbb{R}^{n \times n}$ and $v_1, v_2, ..., v_n$ are non-trivial eigenvectors corresponding to them, then $v_1, v_2, ..., v_n$ are linearly independent // TODO: proof
3. By the above, $A^{n \times n}$ can have at most $n$ different eigenvalues (by algebraic multiplicities)
4. If $\lambda$ is an eigenvalue of $A$, then $\lambda^k$ is an eigenvalue of $A^k$ (by induction)

Trace: trace of $A^{n \times n}$ is the sum of its entries on the diagonal ($\text{tr}(A) = \sum_i a_{ii}$ given $A = (a_{ij})_{1 \leq i, j \leq n}$)

1. Assume that, counted with multiplicities, $A^{n \times n}$ has $n$ eigenvalues; then $\text{tr}(A) = \lambda_1 + \lambda_2 + ... + \lambda_n$ and $\det A = \lambda_1 \cdot \lambda_2 \cdot ... \cdot \lambda_n$ // TODO: 3b1b proof

2. $A \simeq B \implies \text{tr}(A) = \text{tr}(B)$ (but not vice versa)

### Diagonalization

Similar: $A^{n \times n} \simeq B^{n \times n} \iff (\exists S^{n \times n})(S^{-1}AS = B)$

• the effect of transformation $B$ is the effect of transformation $A$ expressed in another basis, via the change-of-basis matrix $S$.

• It is an equivalence relation:

• $A \simeq A$ (reflexive)
• $A \simeq B \implies B \simeq A$ (symmetric)
• $A \simeq B \land B \simeq C \implies A \simeq C$ (transitive)

Theorem: $A^{n \times n} \simeq B^{n \times n} \implies$

1. $\det A = \det B$
2. $A \text{ is invertible} \iff B \text{ is invertible}$
3. $\text{rank}(A) = \text{rank}(B)$
4. $A, B$ have the same characteristic polynomial
5. $A, B$ have the same eigenvalues
6. $(\forall m \geq 0)(A^m \simeq B^m)$ // TODO: proof above

Diagonalizable Matrix: $A^{n \times n}$ is diagonalizable if $(\exists \text{ diagonal matrix } D)(A \simeq D)$

• Theorem: $A$ is diagonalizable iff $A$ has $n$ linearly independent eigenvectors. // TODO: proof

• $A$ is diagonalizable iff $(\exists S, D)(S^{-1}AS = D)$ with $D$ diagonal

• If diagonalizable: in the diagonal form $D$ the eigenvalues appear on the diagonal, and the columns of $S$ are the corresponding eigenvectors

• Corollary: if $A^{n\times n}$ has $n$ distinct real eigenvalues, then $A$ is diagonalizable (distinct eigenvalues yield linearly independent eigenvectors)

Finding Diagonalization:

1. find each $\lambda, E_{\lambda}$, when finding $\lambda$ pay attention to number of solutions a $\lambda$ has. ($\lambda_1, \lambda_2 = m \pm \sqrt{m^2 - p}$)
2. find basis and dimension of eigenspace
3. if for some $\lambda$, $\text{mult}_{g}(\lambda) < \text{mult}_{a}(\lambda)$ then $A$ is not diagonalizable
4. otherwise $S = (v_1 | v_2 | ... | v_n)$ and $D = \text{eigenvalues in the diagonal}$
5. $A^k = SD^kS^{-1}$

Application: given diagonalizable $A^{n \times n}$, calculate $A^k$

1. find $\lambda_1, \lambda_2, ..., \lambda_n$
2. find $v_1, v_2, ..., v_n$
3. Put $v_1, v_2, ..., v_n$ into columns of $S$
4. find $S^{-1}$
5. calculate $D = S^{-1}AS$
6. then $A = SDS^{-1}$
7. then $A^k = SD^kS^{-1}$
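The steps above can be sketched with numpy (the matrix is an arbitrary diagonalizable example with eigenvalues 5 and 2):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, S = np.linalg.eig(A)         # steps 1-3: eigenvalues and eigenvector matrix S
S_inv = np.linalg.inv(S)              # step 4
D = S_inv @ A @ S                     # step 5: D is (numerically) diagonal
assert np.allclose(D, np.diag(eigvals))

k = 5
A_k = S @ np.diag(eigvals ** k) @ S_inv   # step 7: A^k = S D^k S^{-1}
assert np.allclose(A_k, np.linalg.matrix_power(A, k))
```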

// TODO: Connections between and properties of the four fundamental subspaces

// TODO: kernel and image: be able to describe algebraically and geometrically, connections to the four fundamental subspaces

// TODO: connections of kernel/image of a projection and the fundamental subspaces of the corresponding matrix
