Review

Matrix Transformation

Invertible: if both A, B \in \mathbb{R}^{n \times n} are invertible, then AB is invertible and (AB)^{-1} = B^{-1}A^{-1}
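
As a quick sanity check of this identity, a minimal numerical sketch (assuming numpy; random square matrices are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # random square matrices are invertible almost surely
B = rng.standard_normal((4, 4))

lhs = np.linalg.inv(A @ B)                  # (AB)^{-1}
rhs = np.linalg.inv(B) @ np.linalg.inv(A)   # B^{-1} A^{-1}
print(np.allclose(lhs, rhs))                # True up to floating-point error
```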

Transpose:

Theorem: A \in \mathbb{R}^{n \times n}, the following are equivalent

Gauss Jordan Method:

Lower Triangular Matrix: \begin{pmatrix} \circ & 0 & 0 & 0\\ x & \circ & 0 & 0\\ x & x & \circ & 0\\ x & x & x & \circ\\ \end{pmatrix}

Upper Triangular Matrix: \begin{pmatrix} \circ & x & x & x\\ 0 & \circ & x & x\\ 0 & 0 & \circ & x\\ 0 & 0 & 0 & \circ\\ \end{pmatrix}

LU Factorization

LU factorization: every invertible matrix A that can be row-reduced without row exchanges can be written as a product of a lower and an upper triangular matrix, A = LU (in general, PA = LU for some permutation matrix P)
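
A minimal sketch of computing this factorization numerically with scipy, which returns the permuted form A = PLU so row exchanges are covered:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])

P, L, U = lu(A)                    # scipy factors A = P @ L @ U
print(np.allclose(A, P @ L @ U))   # True
print(L)                           # lower triangular, ones on the diagonal
print(U)                           # upper triangular
```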

Vector Space

Subspace Theorem: If V is a vector space, then W is a subspace of V iff

  1. W \subseteq V
  2. W \neq \emptyset
  3. W is closed under addition, and
  4. W is closed under scalar multiplication

// TODO: is a subspace always a vector space?

Trivial Linear Combination: all constants are zero

Trivial Subspace: the zero subspace \{0\} or the whole space

Linear independence: a set of vectors is linearly independent iff c_1v_1 + c_2v_2 + ... + c_nv_n = 0 \implies c_1 = c_2 = ... = c_n = 0 (equivalently, Vc = \overrightarrow{0} \implies c = \overrightarrow{0}, where V = (v_1 | \cdots | v_n) and c is the coefficient vector)

// TODO: prove the last one
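
One practical way to test linear independence numerically, sketched with numpy: stack the vectors as columns of V and compare \text{rank}(V) with the number of vectors (the example vectors are illustrative):

```python
import numpy as np

v1 = np.array([1., 0., 2.])
v2 = np.array([0., 1., 1.])
v3 = np.array([1., 1., 3.])   # v3 = v1 + v2, so the set is dependent

V = np.column_stack([v1, v2, v3])
independent = np.linalg.matrix_rank(V) == V.shape[1]
print(independent)            # False
```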

Let A \in \mathbb{R}^{n \times n}, then the following are equivalent:

// TODO: what is the dimension of a matrix?

// TODO: why an orthogonal matrix's transpose is its inverse

Four Fundamental Subspaces (where R(\cdot) stands for range, not row space, and A^+ is the pseudoinverse of A). Notice A^T acts like the inverse of A (A^{-1}) in undoing the rotation, but keeps the stretch the same. For more info: Visual Explanation of Four Fundamental Subspace

\text{row-rank}(A) = \text{column-rank}(A)

Gaussian Elimination: let A \rightarrow U \rightarrow R, then RS(A) = RS(U) = RS(R) (imagine the row vectors spanning a space)

Rank Theorems:

  1. \text{row-rank}(A) = \text{column-rank}(A) = \text{num-pivots}(\text{rref}(A))
  2. Rank–nullity theorem: \text{dim}(N(A)) + \text{rank}(A) = \text{num-column}(A)
  3. \text{row-rank}(\text{rref}(A)) = \text{num-pivots}(\text{rref}(A)) = \text{num-non-zero-row}(\text{rref}(A)) = \text{column-rank}(\text{rref}(A)) (proof: the pivot columns are linearly independent by definition)
  4. \text{row-rank}(A) = \text{row-rank}(\text{rref}(A))
  5. RS(A) = RS(\text{rref}(A)) (the column spaces of A and \text{rref}(A) are not the same, but the rank does not change)
  6. \text{column-rank}(A) = \text{column-rank}(\text{rref}(A))
  7. N(A) = N(\text{rref}(A)) (row operations do not change the solutions of Ax = 0)
  8. \text{dim}(N(A)) + \text{column-rank}(\text{rref}(A)) = \text{num-column}(A) (proof: construct a spanning set of the null space; the number of free variables = number of columns - number of pivots)

(In summary, \text{rref} preserves RS(\cdot) and N(\cdot), but not C(\cdot); all ranks are preserved.)
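
The rank–nullity theorem (item 2 above) can be checked numerically; a small sketch with numpy/scipy on an illustrative matrix:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])        # rank 1, three columns

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]    # dim(N(A)): columns of null_space(A) span N(A)
print(rank, nullity)                # 1 2
print(rank + nullity == A.shape[1]) # True
```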

Corollary:

  1. Ax = 0 \iff \text{rref}(A)x = 0
  2. (\text{rref}(A)x = 0 \implies x = 0) \implies \text{the columns of } A \text{ are linearly independent} // QUESTION

Finding Basis

Find RS(A): pick the non-zero rows of \text{rref}(A) (those beginning with a leading 1)

Find C(A): pick the pivot columns of \text{rref}(A) and take the corresponding columns of A
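
A sketch of both recipes with sympy, whose rref() also reports the pivot columns (the matrix is illustrative):

```python
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 0],
            [3, 6, 1]])

R, pivots = A.rref()                                      # rref(A) and pivot column indices
row_space_basis = [R.row(i) for i in range(len(pivots))]  # non-zero rows of rref(A)
col_space_basis = [A.col(j) for j in pivots]              # pivot columns, looked up in A itself
print(pivots)                                             # (0, 2)
print(row_space_basis)
print(col_space_basis)
```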

Finding a basis from scratch:

  1. reduce the matrix to rref form
  2. record the indices of the columns that are standard basis vectors (the pivot columns)
  3. go back to the original matrix; the columns at the recorded indices form a basis

Given an incomplete basis, add vectors so that it becomes a basis:

  1. find a basis of the whole space
  2. for each of its vectors, remove the projection onto the original (incomplete) basis; keep the vectors that remain non-zero

Finding a null space:

  1. solve the equation Ax = 0 for x, y, z, ...
  2. write the solution x=?, y=?, z=?, ... in terms of the free variables; the resulting vectors form a basis
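
sympy carries out this recipe directly (a sketch; nullspace() builds the basis from the free variables of rref(A)):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])

basis = A.nullspace()   # basis vectors of N(A) from the free variables
print(basis)            # basis: (-2, 1, 0) and (-3, 0, 1)
```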

Finding dimension of null space:

  1. the dimension of the null space is unchanged by rref
  2. therefore n - \text{num-pivots}(\text{rref}(A)) is the dimension of the null space

Linear Transformation: T(\alpha v + \beta w) = \alpha T(v) + \beta T(w)

// TODO: practice find transformation given transformation in two basis

Theorem: \text{dim}(\text{Img}(T_A)) + \text{dim}(\text{Ker}(T_A)) = n (where n is the dimension of starting space)

| T^{m \times n} : \mathbb{R}^n \rightarrow \mathbb{R}^m | injective | not injective |
|---|---|---|
| surjective | rank(T) = n | rank(T) = m (m<n) |
| not surjective | rank(T) = n (m>n) | rank(T) < m, n |

Think of the above table in the following way:

Theorem: For any matrix A \in \mathbb{R}^{m \times n}

Finding a change of basis:

  1. given two bases B, C, find the representation of \{\begin{pmatrix} 1\\ 0\\ \end{pmatrix}_B, \begin{pmatrix} 0\\ 1\\ \end{pmatrix}_B\} in the standard basis
  2. convert those standard-basis coordinates to the basis C
  3. make them the columns of the change-of-basis matrix
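
A sketch of the recipe with numpy: if the columns of B and C hold the two bases in standard coordinates, then v = B[v]_B = C[v]_C, so the change-of-basis matrix from B to C is C^{-1}B (the bases below are illustrative):

```python
import numpy as np

B = np.array([[1., 1.],
              [0., 1.]])           # columns: basis B written in standard coordinates
C = np.array([[2., 0.],
              [0., 1.]])           # columns: basis C written in standard coordinates

P_B_to_C = np.linalg.solve(C, B)   # C^{-1} B, maps [v]_B to [v]_C
v_B = np.array([3., 2.])           # coordinates of some v in basis B
v_C = P_B_to_C @ v_B
print(np.allclose(B @ v_B, C @ v_C))  # both describe the same vector: True
```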

Orthogonal Basis

Finding Orthogonal Complement: (finding null space)

  1. Find a basis of V
  2. Find a set of linearly independent vectors W that are perpendicular to every v \in V
  3. \text{span}(W) = V^\perp
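
A sketch of the "finding null space" view with scipy: put a basis of V into the rows of a matrix; its null space is V^\perp (the basis of V below is illustrative):

```python
import numpy as np
from scipy.linalg import null_space

V_rows = np.array([[1., 0., 1.],
                   [0., 1., 1.]])  # rows span V

W = null_space(V_rows)             # columns form an orthonormal basis of V^perp
print(W.T)                         # spans the line through (-1, -1, 1)
print(np.allclose(V_rows @ W, 0))  # perpendicular to every v in V: True
```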

Ax = b has a solution iff (\forall y)(y^T \cdot A = 0 \implies y^T \cdot b = 0)

Projection onto a plane: let \{b_1, b_2\} be an orthogonal basis of the plane. \text{proj}(v) = \frac{b_1 \cdot v}{b_1 \cdot b_1} b_1 + \frac{b_2 \cdot v}{b_2 \cdot b_2} b_2
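
A sketch of this formula with numpy, using a hypothetical orthogonal basis \{b_1, b_2\} of the plane y = 0:

```python
import numpy as np

b1 = np.array([1., 0., 1.])
b2 = np.array([1., 0., -1.])   # orthogonal to b1; together they span the plane y = 0
v  = np.array([2., 5., 4.])

proj = (b1 @ v) / (b1 @ b1) * b1 + (b2 @ v) / (b2 @ b2) * b2
print(proj)                    # [2. 0. 4.], the y-component is removed
```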

Gram-Schmidt orthogonalization: any subspace of \mathbb{R}^n has an orthogonal basis.

Gram-Schmidt process

  1. take the next basis vector v
  2. add a new vector b to the collection (containing b_1, ..., b_n) so that it is perpendicular to b_1, ..., b_n and the collection spans \text{span}(\{b_1, ..., b_n, v\})
    • For the first projection: b_2 = (v_2 - \text{proj}_{b_1}(v_2)) = v_2 - \frac{b_1 \cdot v_2}{b_1 \cdot b_1} b_1 \perp b_1
    • For later projections: b = v - \text{proj}_{\text{span}(b_1, ..., b_n)}(v) = v - \frac{b_1 \cdot v}{b_1 \cdot b_1}b_1 - \frac{b_2 \cdot v}{b_2 \cdot b_2}b_2 - ... - \frac{b_n \cdot v}{b_n \cdot b_n}b_n
  3. go to 1 until all basis vectors have been processed
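
A minimal sketch of the whole process in numpy, assuming the input vectors are linearly independent (gram_schmidt here is a hypothetical helper, not a library function):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthogonal basis."""
    basis = []
    for v in vectors:
        b = v.astype(float)                  # copy so the input is not modified
        for q in basis:
            b = b - (q @ v) / (q @ q) * q    # subtract the projection of v onto each earlier q
        basis.append(b)
    return basis

vs = [np.array([1., 1., 0.]), np.array([1., 0., 1.]), np.array([0., 1., 1.])]
orth = gram_schmidt(vs)
print(orth)
print(np.isclose(orth[0] @ orth[1], 0),
      np.isclose(orth[0] @ orth[2], 0),
      np.isclose(orth[1] @ orth[2], 0))      # True True True
```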

Orthogonal Matrix: a square matrix whose columns are orthonormal vectors

For Q \in \mathbb{R}^{n \times n}, the following are equivalent

Determinant

Properties of Determinant:

Determinant rank: \text{det-rank}(A) = \max(\{k | B^{k \times k} = \text{submatrix}(A) \land \det B \neq 0\})

Cramer's rule: \{\frac{\det A_i(b)}{\det A} | 1 \leq i \leq n\} is the unique solution to any system Ax = b if A is invertible // TODO: proof
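
A sketch of Cramer's rule with numpy, where A_i(b) is A with its i-th column replaced by b (fine for small systems; np.linalg.solve is the practical choice):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([5., 10.])

det_A = np.linalg.det(A)
x = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b                     # A_i(b): replace column i of A by b
    x[i] = np.linalg.det(Ai) / det_A
print(x)                             # [1. 3.]
print(np.allclose(A @ x, b))         # True
```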

Cofactor and Adjoint:

Geometric Collinearity

Eigenvalue and Eigenvector

Eigenvalue and (non-trivial) Eigenvector: \lambda \in \mathbb{R} is an eigenvalue of the matrix A^{n \times n} if (\exists v \in \mathbb{R}^n)(v \neq 0 \land Av = \lambda v), where v is an eigenvector

Eigenspace (E_\lambda corresponding to A): the null space of (A - \lambda I), i.e. all the eigenvectors corresponding to A, \lambda (together with the zero vector) // TODO: look at it

Multiplicity:

// TODO: understand below

Properties of Eigenvector and Eigenvalue

  1. 0 is an eigenvalue of A iff A is singular (determinant zero, squashing space)
  2. If \lambda_1, \lambda_2, ..., \lambda_n are different eigenvalues of A \in \mathbb{R}^{n \times n} and v_1, v_2, ..., v_n are non-trivial eigenvectors corresponding to them, then v_1, v_2, ..., v_n are linearly independent // TODO: proof
  3. By the above, A^{n \times n} can have at most n different eigenvalues (counting algebraic multiplicities)
  4. If \lambda is an eigenvalue of A, then \lambda^k is an eigenvalue of A^k (by induction)

Trace: the trace of A^{n \times n} is the sum of its entries on the diagonal (\text{tr}(A) = \sum_i a_{ii} given A = (a_{ij})_{1 \leq i, j \leq n})

  1. Assume that, counting multiplicities, A^{n \times n} has n eigenvalues; then \text{tr}(A) = \lambda_1 + \lambda_2 + ... + \lambda_n and \det A = \lambda_1 \cdot \lambda_2 \cdot ... \cdot \lambda_n // TODO: 3b1b proof

  2. A \simeq B \implies \text{tr}(A) = \text{tr}(B) (but not vice versa)
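
A quick numerical check of property 1 above (a sketch with numpy; eigenvalues of a real matrix may be complex, so compare sums and products numerically):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

eigvals = np.linalg.eigvals(A)                        # eigenvalues with multiplicity
print(np.isclose(np.trace(A), eigvals.sum()))         # tr(A) = sum of eigenvalues: True
print(np.isclose(np.linalg.det(A), eigvals.prod()))   # det(A) = product of eigenvalues: True
```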

Diagonalization

Similar: A^{n \times n} \simeq B^{n \times n} \iff (\exists S^{n \times n})(S^{-1}AS = B)

Theorem: A^{n \times n} \simeq B^{n \times n} \implies

  1. \det A = \det B
  2. A \text{ is invertible} \iff B \text{ is invertible}
  3. \text{rank}(A) = \text{rank}(B)
  4. A, B have the same characteristic polynomial
  5. A, B have the same eigenvalues
  6. (\forall m \geq 0)(A^m \simeq B^m) // TODO: proof above

Diagonalizable Matrix: A^{n \times n} is diagonalizable if (\exists \text{diagonal matrix} D)(A \simeq D)

Finding Diagonalization:

  1. find each \lambda, E_{\lambda}, when finding \lambda pay attention to number of solutions a \lambda has. (\lambda_1, \lambda_2 = m \pm \sqrt{m^2 - p})
  2. find basis and dimension of eigenspace
  3. if for some \lambda, \text{mult}_{g}(\lambda) < \text{mult}_{a}(\lambda), then A is not diagonalizable
  4. otherwise S = (v_1 | v_2 | ... | v_n) and D = \text{eigenvalues in the diagonal}
  5. A^k = SD^kS^{-1}

Application: given diagonalizable A^{n \times n}, calculate A^k

  1. find \lambda_1, \lambda_2, ..., \lambda_n
  2. find v_1, v_2, ..., v_n
  3. Put v_1, v_2, ..., v_n into columns of S
  4. find S^{-1}
  5. calculate D = S^{-1}AS
  6. then A = SDS^{-1}
  7. then A^k = SD^kS^{-1}
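
A sketch of these steps with numpy, assuming A is diagonalizable; np.linalg.eig returns the eigenvalues and the eigenvector matrix S at once, so steps 1-3 come for free:

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])
k = 5

eigvals, S = np.linalg.eig(A)        # columns of S are eigenvectors v_1, ..., v_n
D = np.diag(eigvals)
S_inv = np.linalg.inv(S)

print(np.allclose(S_inv @ A @ S, D))                    # D = S^{-1} A S
print(np.allclose(np.linalg.matrix_power(A, k),
                  S @ np.diag(eigvals ** k) @ S_inv))   # A^k = S D^k S^{-1}
```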

// TODO: Connections between and properties of the four fundamental subspaces
// TODO: kernel and image: be able to describe algebraically and geometrically, connections to the four fundamental subspaces
// TODO: connections of kernel/image of a projection and the fundamental subspaces of the corresponding matrix
