Lecture_002_Linear_Algebra_and_Vector_Calculus

Before proceeding, you need to understand:

  1. Vectors, Vector Space, and Euclidean Vector Space
  2. Functions, and seeing Functions (and Linear Transformations) as Vectors in a Vector Space
  3. Dot Product, Projections, Cross Product
  4. Linear Transformation as matrix
  5. Span, basis, linear independence, Gram-Schmidt Algorithm (QR decomposition)
  6. Vector Field and 3D calculus

Vector

Function Spaces are Vector Spaces

Norm: a non-negative measure of the size of an element of a vector space

Euclidean norm: the length preserved by rotations, translations, and reflections of space. \|u\| := \sqrt{u_1^2 + ... + u_n^2} holds only if the vector is expressed in an orthonormal basis.

L^2 norm: the magnitude of a function. \|f\| := \sqrt{\int_a^b f(x)^2 dx}, where [a, b] is the function's domain

L^2 dot (inner) product: how well two functions "line up". \langle f, g \rangle := \int_a^b f(x)g(x) dx
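
A minimal numeric sketch of both definitions (my own check, assuming NumPy, with sin and cos on [0, 2π] as test functions), approximating the integrals by Riemann sums:

```python
import numpy as np

# Sample the domain [a, b] = [0, 2*pi] densely.
a, b = 0.0, 2.0 * np.pi
x = np.linspace(a, b, 10_000)
dx = x[1] - x[0]

f = np.sin(x)
g = np.cos(x)

# L^2 norm: ||f|| = sqrt(integral of f(x)^2 dx), via a Riemann sum.
norm_f = np.sqrt(np.sum(f**2) * dx)   # ~ sqrt(pi)

# L^2 inner product: <f, g> = integral of f(x) g(x) dx.
inner_fg = np.sum(f * g) * dx         # ~ 0: sin and cos do not "line up"

print(norm_f, np.sqrt(np.pi), inner_fg)
```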

Define a new norm as the L^2 norm of the derivative: this captures how "interesting" an image is, rather than its overall brightness.

The dot product of image vectors captures similarity (see the sketch below).
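
A small sketch of this idea (hypothetical image arrays, assuming NumPy): flatten each image into a vector and compare with a normalized dot product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical grayscale images as 2D arrays of brightness values.
img_a = rng.random((8, 8))
img_b = img_a + 0.05 * rng.random((8, 8))   # a slightly perturbed copy

u = img_a.ravel()   # treat each image as a vector in R^64
v = img_b.ravel()

# Normalized dot product (cosine similarity): 1 means identical direction.
similarity = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(similarity)   # close to 1 for similar images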

Linear Maps (Transformations)

Advantages of Linear Maps

Linear Map (geometric definition): keeps the origin fixed and maps evenly spaced points on a line to evenly spaced points on a line.

Affine Function: not a linear map, because it does not pass through the origin.

Derivatives and Integrals are Linear (see: Notes on Linear Transformations)

Orthonormal Basis

Orthonormal Basis: e_i \cdot e_j = \langle e_i, e_j \rangle = \begin{cases}1 & \text{if } i = j\\ 0 & \text{otherwise}\\\end{cases}

  1. unit length
  2. mutually orthogonal
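
The Gram-Schmidt algorithm from the prerequisites produces such a basis. A minimal sketch (assuming NumPy and linearly independent input columns), checking the two conditions above:

```python
import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of A (assumed linearly independent)."""
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        for i in range(j):
            v -= np.dot(Q[:, i], v) * Q[:, i]   # remove projections onto earlier e_i
        Q[:, j] = v / np.linalg.norm(v)         # unit length
    return Q

A = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
Q = gram_schmidt(A)
print(Q.T @ Q)   # ~ identity: e_i . e_j = 1 if i == j else 0
```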

The Euclidean distance formula holds in any orthonormal basis.

Fourier Transform

Approximated Signal: a signal can be expressed using a Fourier basis of sinusoids.

Fourier Analysis (Decomposition): the process of transforming the original function (signal) into a Fourier series.

Fourier Composition: recombining the Fourier series into the approximated signal (see the sketch below).
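
A round-trip sketch (assuming NumPy's discrete FFT as a stand-in for the continuous Fourier series): analysis produces coefficients, composition rebuilds the signal, and keeping only a few coefficients gives an approximation:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

coeffs = np.fft.fft(signal)          # Fourier analysis (decomposition)
rebuilt = np.fft.ifft(coeffs).real   # Fourier composition

print(np.allclose(signal, rebuilt))  # True: the round trip is exact

# Keeping only the k largest coefficients gives an approximated signal.
k = 4
small = np.argsort(np.abs(coeffs))[:-k]
coeffs[small] = 0.0
approx = np.fft.ifft(coeffs).real
```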

Fourier Decomposition is used in computer graphics.

// QUESTION: do we use the Discrete Time Fourier Series instead?
// QUESTION: is Fourier Decomposition the same as Fourier Analysis?

Cross Product

Definition: \|u \times v\| = \sqrt{\det(u, v, u \times v)} = \|u\|\|v\|\sin \theta, \quad u \times v := \begin{bmatrix} u_2v_3 - u_3v_2\\ u_3v_1 - u_1v_3\\ u_1v_2 - u_2v_1\\ \end{bmatrix}

The cross product can be used to rotate a vector by 90 degrees with respect to a normal vector.

Cross Product as Matrix Multiplication
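
Concretely (a standard identity, consistent with the component formula above): taking the cross product with a fixed u is multiplication by a skew-symmetric matrix \hat{u}:

```latex
u \times v = \hat{u}\, v, \qquad
\hat{u} :=
\begin{bmatrix}
0 & -u_3 & u_2\\
u_3 & 0 & -u_1\\
-u_2 & u_1 & 0\\
\end{bmatrix}
```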

Determinant as a triple product: \det(u, v, w) = (u \times v) \cdot w

Jacobi Identity: u \times (v \times w) + v \times (w \times u) + w \times (u \times v) = 0

Lagrange's Identity: u \times (v \times w) = v(u \cdot w) - w(u \cdot v)
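
A quick numeric check of both identities (my own sketch, assuming NumPy; random vectors, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
u, v, w = rng.random(3), rng.random(3), rng.random(3)

# Jacobi identity: the three cyclic triple products sum to zero.
jacobi = (np.cross(u, np.cross(v, w))
          + np.cross(v, np.cross(w, u))
          + np.cross(w, np.cross(u, v)))

# Lagrange's identity: u x (v x w) = v (u . w) - w (u . v).
lagrange = np.cross(u, np.cross(v, w)) - (v * np.dot(u, w) - w * np.dot(u, v))

print(np.allclose(jacobi, 0.0), np.allclose(lagrange, 0.0))  # True True
```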

Derivatives

Taylor Series: f(x) = \sum_{k = 0}^{\infty} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k

Directional Derivative: D_u f(x) := \lim_{\epsilon \to 0} \frac{f(x + \epsilon u) - f(x)}{\epsilon}

Gradient: \nabla f(x) \cdot u = D_u f(x)
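
A finite-difference sketch of this relation (assuming NumPy; f(x) = x_1^2 + 3 x_1 x_2 is a hypothetical test function, not from the lecture):

```python
import numpy as np

def f(x):
    return x[0]**2 + 3.0 * x[0] * x[1]

def grad_f(x):
    # analytic gradient of f
    return np.array([2.0 * x[0] + 3.0 * x[1], 3.0 * x[0]])

x = np.array([1.0, 2.0])
u = np.array([0.6, 0.8])            # a unit direction
eps = 1e-6

# Directional derivative D_u f(x) by finite differences.
D_u = (f(x + eps * u) - f(x)) / eps

print(np.dot(grad_f(x), u), D_u)    # both ~ 7.2
```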

Approximate a multivariable function using the gradient as the best linear approximation: f(x) \approx f(x_0) + \nabla f(x_0) \cdot (x - x_0)

Example 1: taking the partial derivative \frac{\partial}{\partial u_k} of f := u^T v for 1 \leq k \leq n

\begin{align*} &\frac{\partial}{\partial u_k} u^Tv\\ &= \frac{\partial}{\partial u_k} \sum_{i = 1}^n u_iv_i\\ &= \sum_{i = 1}^n \frac{\partial}{\partial u_k}u_iv_i\\ &= \sum_{i = 1}^n \begin{cases} 0 & \text{if } i \neq k\\ v_k & \text{otherwise}\\ \end{cases}\\ &= v_k\\ &\implies \nabla_u (u^Tv) = \begin{bmatrix} v_1\\ \vdots\\ v_n\\ \end{bmatrix} = v \end{align*}
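
The derived result can be sanity-checked numerically (my own sketch, assuming NumPy; finite differences against the claimed gradient v):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
u, v = rng.random(n), rng.random(n)
eps = 1e-6

# Numeric gradient of f(u) = u^T v, one coordinate at a time.
num_grad = np.zeros(n)
for k in range(n):
    e_k = np.zeros(n)
    e_k[k] = 1.0
    num_grad[k] = (np.dot(u + eps * e_k, v) - np.dot(u, v)) / eps

print(np.allclose(num_grad, v, atol=1e-4))  # True: the gradient is v
```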

Matrix Derivative

Example 2: taking the gradient of a function that takes functions as input: F(f) := \langle f, g \rangle := \int_a^b f(x)g(x) dx. We get \nabla F = g (see the sketch below).
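
A discretized sketch of this (my own check; the assumption is that sampling f and g on a grid turns F into F(f) ≈ Σᵢ fᵢ gᵢ Δx, so Euclidean partials divided by Δx give the gradient under the L² inner product):

```python
import numpy as np

a, b, m = 0.0, 1.0, 200
x = np.linspace(a, b, m)
dx = x[1] - x[0]

f = np.sin(2 * np.pi * x)
g = np.cos(2 * np.pi * x)

F = lambda f: np.sum(f * g) * dx   # discretized F(f) = <f, g>

eps = 1e-6
partials = np.array([(F(f + eps * np.eye(m)[i]) - F(f)) / eps
                     for i in range(m)])

# Dividing by dx converts Euclidean partials into the L^2 gradient, which is g.
print(np.allclose(partials / dx, g, atol=1e-4))  # True
```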

Intuition of \nabla \langle f, g \rangle

Calculation of \nabla \langle f, f \rangle

Divergence and Curl as Derivatives of a Vector Field

Divergence: \nabla \cdot X := \sum_{i = 1}^n \frac{\partial X_i}{\partial u_i} where \nabla = (\frac{\partial}{\partial u_1}, ..., \frac{\partial}{\partial u_n}), X(u) = (X_1(u), ..., X_n(u))

Curl: \nabla \times X := \begin{bmatrix} \frac{\partial X_3}{\partial u_2} - \frac{\partial X_2}{\partial u_3}\\ \frac{\partial X_1}{\partial u_3} - \frac{\partial X_3}{\partial u_1}\\ \frac{\partial X_2}{\partial u_1} - \frac{\partial X_1}{\partial u_2}\\ \end{bmatrix} where \nabla = (\frac{\partial}{\partial u_1}, \frac{\partial}{\partial u_2}, \frac{\partial}{\partial u_3}), X(u) = (X_1(u), X_2(u), X_3(u))

In 2D, the divergence of X equals the (scalar) curl of the 90-degree rotation of X (see the sketch below).
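
A numeric sketch of this 2D claim (my own check, assuming NumPy; a hypothetical field X = (xy, x - y^2)): rotate X = (X₁, X₂) to X⊥ = (-X₂, X₁) and compare div X with the scalar curl of X⊥:

```python
import numpy as np

n = 100
x = np.linspace(-1.0, 1.0, n)
X1, X2 = np.meshgrid(x, x, indexing="ij")
X1, X2 = X1 * X2, X1 - X2**2       # a hypothetical vector field X = (X1, X2)

d = x[1] - x[0]
div_X = np.gradient(X1, d, axis=0) + np.gradient(X2, d, axis=1)

# 90-degree rotation: X_perp = (-X2, X1).
# Scalar 2D curl of X_perp: d(X1)/dx - d(-X2)/dy.
curl_X_perp = np.gradient(X1, d, axis=0) - np.gradient(-X2, d, axis=1)

print(np.allclose(div_X, curl_X_perp))  # True
```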

Changing divergence to curl in a fluid simulation produces different results.

Laplacian

Laplacian: an operator that encodes the concavity (concave up) of a multivariable function. It can be defined in several equivalent ways:

\begin{align*} \triangle f &:= \nabla \cdot \nabla f = \text{div}(\nabla f) \tag{divergence of gradient}\\ \triangle f &:= \sum_{i = 1}^n \frac{\partial^2 f}{\partial x_i^2} \tag{sum of 2nd partial derivatives}\\ \triangle f &:= -\nabla_f \left(\frac{1}{2} \|\nabla f\|^2\right) \tag{gradient of Dirichlet energy}\\ \triangle f &:= \text{tr}(\nabla^2 f) \tag{trace of Hessian}\\ \end{align*}

It can also be characterized as the graph Laplacian and as the variation of surface area.
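
A numeric cross-check (my own sketch, assuming NumPy; f(x, y) = x^2 + y^3 is a hypothetical test function) that divergence-of-gradient matches the analytic sum of second partials, Δf = 2 + 6y:

```python
import numpy as np

n = 200
s = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(s, s, indexing="ij")
f = X**2 + Y**3                      # a hypothetical test function

d = s[1] - s[0]
fx = np.gradient(f, d, axis=0)       # gradient components
fy = np.gradient(f, d, axis=1)

# Divergence of the gradient = sum of second partial derivatives.
lap = np.gradient(fx, d, axis=0) + np.gradient(fy, d, axis=1)

exact = 2.0 + 6.0 * Y                # analytic Laplacian of f
interior = (slice(2, -2), slice(2, -2))   # skip one-sided boundary stencils
print(np.allclose(lap[interior], exact[interior], atol=1e-2))  # True
```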

Hessian

The Hessian is a symmetric matrix that gives the quadratic term of the Taylor approximation of f.

\begin{align*} (\nabla^2 f)\,u &:= D_u (\nabla f)\\ \nabla^2 f &:= \begin{bmatrix} \frac{\partial^2 f}{\partial x_1 \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n}\\ \vdots & \ddots & \vdots\\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n \partial x_n}\\ \end{bmatrix} \end{align*}
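
A finite-difference sketch of the defining property (∇²f)u = D_u(∇f), my own check assuming NumPy and a hypothetical f(x) = x₁²x₂ + x₂³:

```python
import numpy as np

def grad_f(x):
    # analytic gradient of f(x) = x1^2 * x2 + x2^3
    return np.array([2.0 * x[0] * x[1], x[0]**2 + 3.0 * x[1]**2])

def hess_f(x):
    # analytic Hessian: symmetric matrix of second partials
    return np.array([[2.0 * x[1], 2.0 * x[0]],
                     [2.0 * x[0], 6.0 * x[1]]])

x = np.array([1.0, 2.0])
u = np.array([0.6, 0.8])
eps = 1e-6

# Directional derivative of the gradient along u ...
D_u_grad = (grad_f(x + eps * u) - grad_f(x)) / eps

# ... equals the Hessian applied to u.
print(hess_f(x) @ u, D_u_grad)   # both ~ [4.0, 10.8]
```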
