Invertible: if both A, B \in \mathbb{R}^{n \times n} are invertible, then AB is invertible and (AB)^{-1} = B^{-1}A^{-1}
inverse of matrix \begin{pmatrix} a & b\\ c & d\\ \end{pmatrix} is \frac{1}{ad-bc} \begin{pmatrix} d & -b\\ -c & a\\ \end{pmatrix} if ad - bc \neq 0
invertible iff ad - bc \neq 0
if A \in \mathbb{R}^{n \times n} is invertible, then A^k is invertible and (A^k)^{-1} = (A^{-1})^k = A^{-k}
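A minimal numpy sketch checking these identities (the matrices are arbitrary invertible examples, not from the notes):

```python
import numpy as np

# two arbitrary invertible 2x2 example matrices
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0]])

# (AB)^{-1} should equal B^{-1} A^{-1}
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))

# 2x2 inverse formula: (1 / (ad - bc)) * [[d, -b], [-c, a]]
a, b, c, d = A.ravel()
formula = np.array([[d, -b], [-c, a]]) / (a * d - b * c)
assert np.allclose(formula, np.linalg.inv(A))
```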
Transpose:
A \text{ is invertible } \implies A^\intercal \text{ is invertible}
(AB)^T = B^TA^T
Theorem: A \in \mathbb{R}^{n \times n}, the following are equivalent
A is invertible
Ax = b has a unique solution for every b \in \mathbb{R}^n (A is Non-Singular Matrix)
The only solution for Ax = 0 is x = 0
Using row operations on Ax = 0 reaches \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{pmatrix} x = 0, i.e. x_1 = x_2 = x_3 = 0 (\text{rref}(A) = I, shown for n = 3)
Using row operations on Ax = b reaches \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{pmatrix} x = \text{something}, i.e. each x_i gets a unique value
A is a product of elementary matrices
Gauss Jordan Method: to invert A, row reduce the augmented matrix [A \mid I] until the left half becomes I; the right half is then A^{-1}
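A minimal sketch of Gauss–Jordan inversion by row reducing [A | I] (assumes A is invertible and no zero pivot appears, so no row swaps; the function name and example matrix are illustrative only):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row reduce [A | I] to [I | A^{-1}] (no pivoting handled)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # augmented matrix [A | I]
    for i in range(n):
        M[i] = M[i] / M[i, i]                     # scale pivot row so pivot = 1
        for j in range(n):
            if j != i:
                M[j] = M[j] - M[j, i] * M[i]      # clear the rest of column i
    return M[:, n:]                               # right half is A^{-1}

A = np.array([[2.0, 1.0], [1.0, 1.0]])
assert np.allclose(gauss_jordan_inverse(A) @ A, np.eye(2))
```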
Lower Triangular Matrix: \begin{pmatrix} \circ & 0 & 0 & 0\\ x & \circ & 0 & 0\\ x & x & \circ & 0\\ x & x & x & \circ\\ \end{pmatrix}
Upper Triangular Matrix: \begin{pmatrix} \circ & x & x & x\\ 0 & \circ & x & x\\ 0 & 0 & \circ & x\\ 0 & 0 & 0 & \circ\\ \end{pmatrix}
LU factorization: an invertible matrix A that can be row reduced to upper triangular form without row swaps can be written as a product of a lower and an upper triangular matrix, A = LU (in general a row permutation is needed: PA = LU)
E_{a, b} is the elementary matrix that uses row b to modify row a.
A = LU (L^{-1}A = U) where L^{-1} = E_0E_1E_2...E_{n-1} and U is upper triangular
Using LU to find x: to solve A \overrightarrow{x} = \overrightarrow{b}, first solve L\overrightarrow{y} = \overrightarrow{b} (forward substitution), then U\overrightarrow{x} = \overrightarrow{y} (back substitution); equivalently \overrightarrow{x} = U^{-1}(L^{-1}\overrightarrow{b})
LU factorization is not unique
To Find LU: run Gaussian elimination on A to reach U, recording the multiplier used for each row operation; placing those multipliers below the diagonal (with 1s on the diagonal) gives L
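A minimal sketch of LU without pivoting plus forward/back substitution to solve Ax = b (assumes no zero pivots; the function names and example system are illustrative, and scipy.linalg.lu is the robust alternative):

```python
import numpy as np

def lu_no_pivot(A):
    """LU without row swaps: L has unit diagonal, multipliers stored below it."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]        # multiplier used to clear entry (i, k)
            U[i] -= m * U[k]
            L[i, k] = m
    return L, U

def solve_lu(L, U, b):
    """Solve Ly = b (forward substitution), then Ux = y (back substitution)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)
assert np.allclose(A @ solve_lu(L, U, b), b)
```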
Subspace Theorem: If V is a vector space, then W \subseteq V is a subspace of V iff 0 \in W and W is closed under vector addition and scalar multiplication
// TODO: is a subspace always a vector space?
Trivial Linear Combination: a linear combination in which all constants are zero. Trivial Subspaces: the zero subspace \{0\} and the full space V
Linear independence: a set of vectors is linearly independent iff c_1v_1 + c_2v_2 + ... + c_nv_n = 0 \implies c_1 = c_2 = ... = c_n = 0 (or CV = 0 \implies C = \overrightarrow{0})
Let w \in \text{span}(V) with w \notin V; then \{w\} \cup V is linearly dependent. (If V is linearly independent and w \notin \text{span}(V), then \{w\} \cup V is linearly independent.)
If V is linearly dependent, there exists w \in V such that w \in \text{span}(V - \{w\})
v_1, v_2, ..., v_n are linearly independent iff every w \in \text{span}(v_1, ..., v_n) can be expressed as a unique linear combination of v_1, ..., v_n.
If v_1, v_2, ..., v_n are linearly independent, but v_1, v_2, ..., v_n, m are linearly dependent, then m \in \text{span}(\{v_1, ..., v_n\})
// TODO: prove the last one
Let A \in \mathbb{R}^{n \times n}, then the following are equivalent:
A is invertible
N(A) = \{0\} (nullspace)
columns of A are linearly independent
rows of A are linearly independent
Ax = b has a unique solution for every b \in \mathbb{R}^n (A is Non-Singular Matrix)
The only solution for Ax = 0 is x = 0
Using row operations on Ax = 0 reaches \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{pmatrix} x = 0, i.e. x_1 = x_2 = x_3 = 0 (\text{rref}(A) = I, shown for n = 3)
Using row operations on Ax = b reaches \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{pmatrix} x = \text{something}, i.e. each x_i gets a unique value
A is a product of elementary matrices
// TODO: what is dimension of a matrix?
// TODO: why orthogonal matrix transpose is its inverse
\text{row-rank}(A) = \text{column-rank}(A)
Gaussian Elimination: let A \rightarrow U \rightarrow R, then RS(A) = RS(U) = RS(R) (imagine the rows as vectors spanning a space)
Swapping rows: swaps two axes (the determinant changes sign; the column space may change)
Multiplying a row by a constant: scales an axis by that constant (the determinant scales by the constant; the column space may change)
Adding/subtracting a multiple of one row to another: moves the origin along a vector with the vector tips fixed (the determinant's shape is sheared but its volume does not change; the row space remains, the column space may change)
Row space remains by definition of linear combination
Rank Theorems:
(In summary, \text{rref} preserves RS(\cdot) and N(\cdot) but not C(\cdot); the rank of every fundamental subspace is always preserved.)
Corollary:
Find RS(A): pick the non-zero rows of rref(A) (those beginning with a leading 1). Find C(A): pick the pivot columns of rref(A) and take the corresponding columns of A
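A minimal sympy sketch of both recipes (the example matrix is arbitrary): `Matrix.rref()` returns the reduced form together with the pivot column indices.

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])

R, pivot_cols = A.rref()          # R = rref(A), pivot_cols = pivot column indices

# Basis of RS(A): the non-zero rows of rref(A)
row_basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]

# Basis of C(A): the columns of the ORIGINAL A at the pivot positions
col_basis = [A.col(j) for j in pivot_cols]

print(row_basis)   # [Matrix([[1, 0, -1]]), Matrix([[0, 1, 2]])]
print(col_basis)   # columns 0 and 1 of A
```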
Finding a basis from scratch:
Given an incomplete basis, add vectors so that it becomes a basis:
Finding a null space:
Finding dimension of null space:
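A sympy sketch (same arbitrary example matrix as above): `nullspace()` returns a basis of N(A); its length is the dimension, which matches n - rank(A) by rank-nullity.

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])

null_basis = A.nullspace()                     # basis vectors of N(A)
print(null_basis)                              # [Matrix([[1], [-2], [1]])]
print(len(null_basis))                         # dim N(A) = 1
assert len(null_basis) == A.cols - A.rank()    # rank-nullity check
```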
Linear Transformation: T(\alpha v + \beta w) = \alpha T(v) + \beta T(w)
// TODO: practice find transformation given transformation in two basis
Theorem: \text{dim}(\text{Img}(T_A)) + \text{dim}(\text{Ker}(T_A)) = n (where n is the dimension of starting space)
T^{m \times n} : \mathbb{R}^n \rightarrow \mathbb{R}^m | injective | not injective |
---|---|---|
surjective | rank(T) = n = m | rank(T) = m (m < n) |
not surjective | rank(T) = n (m > n) | rank(T) < m, n |
Think of the above table in the following way:
\text{rank}(T): the number of connected dots on the range of the function
n: the number of total dots on the domain of the function
m: the number of total dots on the range of the function
each dot above represents a dimension
Theorem: For any matrix A \in \mathbb{R}^{m \times n}
RS(A)^\perp = N(A) (\text{dim}(V) + \text{dim}(V^\perp) = n)
C(A)^\perp = \text{left-nullspace}(A)
Lemma: N(A) \perp RS(A)
Finding Change of basis:
Finding Orthogonal Complement: put the spanning vectors as the rows of a matrix and find its null space
Ax = b has a solution iff (\forall y)(y^T A = 0 \implies y^T b = 0)
Projection to plane: let \{b_1, b_2\} be an orthogonal basis of the plane. T(v) = \frac{b_1 \cdot v}{b_1 \cdot b_1} b_1 + \frac{b_2 \cdot v}{b_2 \cdot b_2} b_2
in general T(v) = \frac{b_1 \cdot v}{b_1 \cdot b_1} b_1 + \frac{b_2 \cdot v}{b_2 \cdot b_2} b_2 + ... + \frac{b_k \cdot v}{b_k \cdot b_k} b_k
in general, for matrix: P_W = P_{b_1} + P_{b_2} + ... + P_{b_k}
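A minimal numpy sketch of projecting v onto a plane with an orthogonal basis (the vectors and the helper `proj` are arbitrary illustrative choices):

```python
import numpy as np

def proj(b, v):
    """Projection of v onto the line spanned by b."""
    return (b @ v) / (b @ b) * b

# orthogonal basis of a plane in R^3 (arbitrary example vectors)
b1 = np.array([1.0, 0.0, 1.0])
b2 = np.array([1.0, 0.0, -1.0])
v  = np.array([3.0, 4.0, 5.0])

p = proj(b1, v) + proj(b2, v)        # projection of v onto the plane
print(p)                             # [3. 0. 5.]

# projection matrix P_W = P_{b1} + P_{b2}
P = np.outer(b1, b1) / (b1 @ b1) + np.outer(b2, b2) / (b2 @ b2)
assert np.allclose(P @ v, p)
```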
Gram-Schmidt orthogonalization: any subspace of \mathbb{R}^n has an orthogonal basis.
Gram-Schmidt process: u_1 = v_1, \quad u_k = v_k - \sum_{i=1}^{k-1} \frac{u_i \cdot v_k}{u_i \cdot u_i} u_i
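A minimal sketch of the process (the function name and input vectors are illustrative; it assumes the inputs are linearly independent). Normalizing the result gives orthonormal columns, so the resulting Q satisfies Q^T Q = I:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        u = v.astype(float).copy()
        for b in basis:
            u -= (b @ v) / (b @ b) * b      # subtract projection onto each earlier vector
        basis.append(u)
    return basis

vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
ortho = gram_schmidt(vs)

# normalize to get an orthonormal set; as columns of Q, Q^T Q = I
Q = np.column_stack([u / np.linalg.norm(u) for u in ortho])
assert np.allclose(Q.T @ Q, np.eye(3))
```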
Orthogonal Matrix: a square matrix whose columns are orthonormal vectors
For Q \in \mathbb{R}^{n \times n}, the following are equivalent
Q \text{ is orthogonal}
Transpose Identity: Q^T = Q^{-1}
Properties of Determinant:
\det A \neq 0 \iff A \text{ is invertible}
\det A = \det A^T
If A invertible, then \det A^{-1} = \frac{1}{\det A}
Determinant rank: \text{det-rank}(A) = \max(\{k | B^{k \times k} = \text{submatrix}(A) \land \det B \neq 0\})
Theorem: for any A \in \mathbb{R}^{n \times n}, \text{row-rank}(A) = \text{column-rank}(A) = \text{determinant-rank}(A)
// TODO: proof
Cramer's rule: \{\frac{\det A_i(b)}{\det A} | 1 \leq i \leq n\} is the unique solution to any system Ax = b if A is invertible, where A_i(b) is A with its i-th column replaced by b // TODO: proof
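A minimal sketch of Cramer's rule (the function name and example system are illustrative; only sensible for small invertible matrices):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b via Cramer's rule."""
    n = len(b)
    det_A = np.linalg.det(A)
    x = np.zeros(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b                      # A_i(b): replace column i with b
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
assert np.allclose(cramer(A, b), np.linalg.solve(A, b))
```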
Cofactor and Adjoint:
A = (a_{ij})_{1 \leq i, j \leq n}
(i, j)-cofactor of A is C_{ij} = (-1)^{i+j} \det A_{ij}, where A_{ij} is A with row i and column j removed
Adjoint Matrix: \begin{align*} \text{adj} A = &(C_{ij})_{1 \leq i, j \leq n}^T\\ = &(C_{ji})_{1 \leq i, j \leq n}\\ = &\begin{pmatrix} C_{11} & C_{21} & ... & C_{n1}\\ C_{12} & C_{22} & ... & C_{n2}\\ ... & ... & ... & ...\\ C_{1n} & C_{2n} & ... & C_{nn}\\ \end{pmatrix} \end{align*}
Theorem: If A \in \mathbb{R}^{n \times n} is invertible, then A^{-1} = \frac{1}{\det A} \text{adj} A // TODO: proof
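A sketch of A^{-1} = adj(A)/det(A) built from cofactors (the helper name and example matrix are illustrative; numerically np.linalg.inv is preferred):

```python
import numpy as np

def adjugate(A):
    """adj(A): the transpose of the cofactor matrix."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)  # A with row i, col j removed
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)       # (i, j)-cofactor
    return C.T

A = np.array([[1.0, 2.0, 3.0], [0.0, 1.0, 4.0], [5.0, 6.0, 0.0]])
A_inv = adjugate(A) / np.linalg.det(A)
assert np.allclose(A_inv, np.linalg.inv(A))
```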
Geometric Collinearity and Coplanarity
2D collinear test: (a_1, a_2), (b_1, b_2), (c_1, c_2) are collinear iff \begin{vmatrix} a_1 & a_2 & 1\\ b_1 & b_2 & 1\\ c_1 & c_2 & 1\\ \end{vmatrix} = 0
3D coplanar test: (a_1, a_2, a_3), (b_1, b_2, b_3), (c_1, c_2, c_3), (d_1, d_2, d_3) are in the same plane (coplanar) iff \begin{vmatrix} a_1 & a_2 & a_3 & 1\\ b_1 & b_2 & b_3 & 1\\ c_1 & c_2 & c_3 & 1\\ d_1 & d_2 & d_3 & 1\\ \end{vmatrix} = 0
2D line: the line through (b_1, b_2), (c_1, c_2) can be expressed as \begin{vmatrix} x & y & 1\\ b_1 & b_2 & 1\\ c_1 & c_2 & 1\\ \end{vmatrix} = 0
3D plane: the plane through (b_1, b_2, b_3), (c_1, c_2, c_3), (d_1, d_2, d_3) can be expressed as \begin{vmatrix} x & y & z & 1\\ b_1 & b_2 & b_3 & 1\\ c_1 & c_2 & c_3 & 1\\ d_1 & d_2 & d_3 & 1\\ \end{vmatrix} = 0
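A minimal sketch of the 2D collinearity test via the 3×3 determinant (the function name, tolerance, and points are arbitrary examples):

```python
import numpy as np

def collinear_2d(a, b, c, tol=1e-9):
    """Three points in R^2 are collinear iff this determinant is (numerically) 0."""
    M = np.array([[a[0], a[1], 1.0],
                  [b[0], b[1], 1.0],
                  [c[0], c[1], 1.0]])
    return abs(np.linalg.det(M)) < tol

print(collinear_2d((0, 0), (1, 1), (2, 2)))   # True: all on the line y = x
print(collinear_2d((0, 0), (1, 1), (2, 3)))   # False
```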
Eigenvalue and (non-trivial) Eigenvector: \lambda \in \mathbb{R} is an eigenvalue of the matrix A^{n \times n} if (\exists v \in \mathbb{R}^n)(v \neq 0 \land Av = \lambda v) where v is an eigenvector
Find Eigenvector given Eigenvalue: \begin{align*} Av &= \lambda v\\ Av - \lambda v &= 0\\ Av - \lambda I v &= 0\\ (A - \lambda I) v &= 0\\ \end{align*} // QUESTION: is it a zoom?
v \in N(A - \lambda I) \land v \neq 0 \iff v \text{ is a non-trivial eigenvector}
Find Eigenvalue given Matrix: \begin{align*} &\lambda \text{ is eigenvalue of } A\\ \iff &(A - \lambda I)x = 0 \text{ has non-trivial solution} \\ \iff &(A - \lambda I) \text{ is singular}\\ \iff &\det(A - \lambda I) = 0\\ \end{align*}
Characteristic polynomial: \det (A - \lambda I) (expanding \det(A - \lambda I) = 0 gives a polynomial in \lambda; its roots are the eigenvalues)
Eigenspace (E_\lambda corresponding to A and \lambda): the nullspace of (A - \lambda I); it contains all the eigenvectors for \lambda (together with 0) // TODO: look at it
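A minimal numpy sketch (the matrix is an arbitrary example): `np.linalg.eig` returns the eigenvalues and a matrix whose columns are eigenvectors, each lying in N(A - \lambda I):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)        # columns of eigvecs are eigenvectors
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)                      # A v = lambda v
    assert np.allclose((A - lam * np.eye(2)) @ v, 0)        # v is in N(A - lambda I)
```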
Multiplicity:
Algebraic Multiplicity (\text{mult}_{a}): the algebraic multiplicity of \lambda is its multiplicity as a root of the characteristic equation (how many times it is repeated as a root).
Algebraic Multiplicities: the multiset of roots of the characteristic polynomial, counted with repetition
Geometric Multiplicity (\text{mult}_{g}): the geometric multiplicity of \lambda is the dimension of its eigenspace. (\text{mult}_{g}(\lambda) = \text{dim}(E_{\lambda}) = \text{dim}(N(A - \lambda I)))
Theorem: \text{mult}_{a} \geq \text{mult}_{g} // QUESTION: why
Diagonalizable: A^{n \times n} is diagonalizable iff \text{mult}_{a}(\lambda) = \text{mult}_{g}(\lambda) for every eigenvalue \lambda (and, over \mathbb{R}, all eigenvalues are real) // QUESTION: why
Diagonalize Matrix as a scale (backward): S^{-1}AS = D \implies \text{the diagonal entries of } D \text{ are eigenvalues, and the columns of } S \text{ are the corresponding eigenvectors}
Linearly Independent Basis: Let \{\lambda_1, \lambda_2, ..., \lambda_k\} be distinct eigenvalues of A with eigenspaces \{E_1, E_2, ..., E_k\} and bases \{B_1, B_2, ..., B_k\}; then all the basis vectors taken together (B_1 \cup B_2 \cup ... \cup B_k) are linearly independent. // QUESTION: why, what if A is a scaling matrix?
Corollary: \text{dim}(E_1) + \text{dim}(E_2) + ... + \text{dim}(E_k) \leq n (that is \text{mult}_g(\lambda_1) + \text{mult}_g(\lambda_2) + ... + \text{mult}_g(\lambda_k) \leq n)
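A sympy sketch of both multiplicities (the shear matrix is an illustrative example where they differ): `eigenvals()` reports algebraic multiplicities, `eigenvects()` gives a basis of each eigenspace, whose size is the geometric multiplicity.

```python
from sympy import Matrix

# a shear matrix: lambda = 1 has algebraic multiplicity 2 but geometric multiplicity 1
A = Matrix([[1, 1],
            [0, 1]])

print(A.eigenvals())     # {1: 2} -> algebraic multiplicity of each eigenvalue
for lam, alg_mult, basis in A.eigenvects():
    geo_mult = len(basis)             # dimension of the eigenspace E_lambda
    print(lam, alg_mult, geo_mult)    # 1 2 1 -> mult_a >= mult_g, not diagonalizable
```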
// TODO: understand below Properties of Eigenvector and Eigenvalue
Trace: the trace of A^{n \times n} is the sum of its diagonal entries (\text{tr}(A) = \sum_i a_{ii} given A = (a_{ij})_{1 \leq i, j \leq n})
Assume that together with multiplicities A^{n \times n} has n eigenvalues, then \text{tr}(A) = \lambda_1 + \lambda_2 + ... + \lambda_n and \det A = \lambda_1 \cdot \lambda_2 \cdot ... \cdot \lambda_n // TODO: 3b1b proof
A \simeq B \implies \text{tr}(A) = \text{tr}(B) (but not vice versa)
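A quick numeric check of tr(A) = \sum \lambda_i and \det A = \prod \lambda_i (arbitrary example matrix):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
eigvals = np.linalg.eigvals(A)                        # eigenvalues, with multiplicity

assert np.isclose(np.trace(A), eigvals.sum())         # tr(A) = sum of eigenvalues
assert np.isclose(np.linalg.det(A), eigvals.prod())   # det(A) = product of eigenvalues
```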
Similar: A^{n \times n} \simeq B^{n \times n} \iff (\exists S^{n \times n})(S^{-1}AS = B)
the effect of the transformation B is the effect of A expressed in a different basis, via the change-of-basis matrix S.
It is an equivalence relation:
Theorem: A^{n \times n} \simeq B^{n \times n} \implies A and B have the same characteristic polynomial, eigenvalues, determinant, trace, and rank
Diagonalizable Matrix: A^{n \times n} is diagonalizable if (\exists \text{diagonal matrix} D)(A \simeq D)
Theorem: A is diagonalizable iff A has n independent eigenvectors. // TODO: proof
A is diagonalizable if (\exists S, \text{diagonal } D)(S^{-1}AS = D)
If A is diagonalizable and S^{-1}AS = D, then the eigenvalues appear on the diagonal of D and the columns of S are the corresponding eigenvectors
Corollary: if A^{n\times n} has n distinct real eigenvalues, then A is diagonalizable (distinct eigenvalues give n linearly independent eigenvectors)
Finding Diagonalization: find the eigenvalues, then a basis for each eigenspace; put the eigenvectors as the columns of S and the corresponding eigenvalues on the diagonal of D
Application: given diagonalizable A^{n \times n}, calculate A^k = (SDS^{-1})^k = SD^kS^{-1}
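A minimal numpy sketch of diagonalizing and using it to compute A^k (the matrix and k are arbitrary examples; assumes A is diagonalizable so S is invertible):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])

eigvals, S = np.linalg.eig(A)      # columns of S are eigenvectors
D = np.diag(eigvals)               # eigenvalues on the diagonal

assert np.allclose(S @ D @ np.linalg.inv(S), A)        # A = S D S^{-1}

k = 5
A_k = S @ np.diag(eigvals ** k) @ np.linalg.inv(S)     # A^k = S D^k S^{-1}
assert np.allclose(A_k, np.linalg.matrix_power(A, k))
```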
// TODO: Connections between and properties of the four fundamental subspaces
// TODO: kernel and image: be able to describe algebraically and geometrically, connections to the four fundamental subspaces
// TODO: connections of kernel/image of a projection and the fundamental subspaces of the corresponding matrix
Table of Content