Lecture_002_Linear_Algebra_and_Vector_Calculus

Before proceeding, you need to understand:

  1. Vectors, Vector Space, and Euclidean Vector Space
  2. Functions and see Functions (and Linear Transformations) as Vectors in Vector Space
  3. Dot Product, Projections, Cross Product
  4. Linear Transformation as matrix
  5. Span, basis, linear independence, Gram-Schmidt Algorithm (QR decomposition)
  6. Vector Field and 3D calculus

Vector spaces are useful because functions are infinite-dimensional vectors; therefore manifolds (represented by signed distance functions) are vectors, spherical harmonics are vectors, and so on.

Vector

Function Spaces are Vector Spaces

Norm: a non-negative quantity assigned to elements of a vector space, measuring their size.

Euclidean norm is any notion of length preserved by rotations, translations, reflections of space.

(Euclidean) Inner product determines a (Euclidean) norm: \|v\| = \sqrt{v \cdot v}. Euclidean inner product is

\langle u, v \rangle = u \cdot v = |u||v|\cos(\theta)

Euclidean norm: length preserved by rotation, translation, and reflection of space. \|u\| := \sqrt{u_1^2 + ... + u_n^2} holds only if the vector is expressed in an orthonormal basis.

L^2 norm: magnitude of a function. \|f\| := \sqrt{\int_a^b f(x)^2 dx}, where [a, b] is the function's domain.

L0, L0.5, L1, L2, L-inf Norm with Equal Distance in 3D

Note that the L^2 norm does not strictly satisfy the definition of a norm. Consider the function f(x) = \begin{cases} 0 & \text{if } 0 < x \leq 1\\ 1 & \text{if } x = 0 \end{cases}. This function is not the zero function, yet its L^2 norm is zero, which breaks the definition (positive definiteness).

L^2 dot (inner) product: how well two functions "line up". \langle f, g \rangle := \int_a^b f(x)g(x) dx

Any function that satisfies the axioms of the dot product is a dot product: \begin{align*} u \cdot v &= v \cdot u\\ u \cdot u &\geq 0\\ u \cdot u = 0 &\iff u = 0\\ (cu) \cdot v &= c(u \cdot v)\\ (u + v) \cdot w &= u \cdot w + v \cdot w \end{align*}
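As a quick sanity check, here is a minimal sketch (assuming NumPy; the interval and test functions are arbitrary choices, not from the lecture) of the L^2 inner product and norm approximated by a Riemann sum:

```python
import numpy as np

def l2_inner(f, g, a, b, n=100_000):
    """Approximate <f, g> = integral_a^b f(x) g(x) dx by a Riemann sum."""
    x = np.linspace(a, b, n, endpoint=False)
    dx = (b - a) / n
    return np.sum(f(x) * g(x)) * dx

def l2_norm(f, a, b):
    """||f|| := sqrt(<f, f>)."""
    return np.sqrt(l2_inner(f, f, a, b))

a, b = 0.0, np.pi
print(l2_inner(np.sin, np.cos, a, b))   # ~0: sin and cos barely "line up" on [0, pi]
print(l2_norm(np.sin, a, b))            # ~sqrt(pi/2) ~ 1.2533
# symmetry axiom <f, g> = <g, f>:
print(l2_inner(np.sin, np.cos, a, b) == l2_inner(np.cos, np.sin, a, b))  # True
```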

Define a new norm as the L^2 norm of the derivative. This captures how "interesting" the image is rather than its overall brightness.

The dot product of image vectors captures similarity
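A minimal sketch of this idea (assuming NumPy; the images here are synthetic random placeholders, not the lecture's images), treating grayscale images as flattened vectors and comparing them with a normalized dot product:

```python
import numpy as np

def similarity(img_a, img_b):
    """Cosine similarity: dot product of flattened image vectors, normalized."""
    u, v = img_a.ravel(), img_b.ravel()
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(0)
base = rng.random((16, 16))                    # hypothetical 16x16 grayscale image
noisy = base + 0.05 * rng.random((16, 16))     # slightly perturbed copy
other = rng.random((16, 16))                   # unrelated image

print(similarity(base, noisy))   # close to 1: nearly identical images
print(similarity(base, other))   # noticeably smaller
```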

Linear Maps (Transformation)

Advantage of Linear Maps:

Linear Map (Geometric Definition): keeps the origin fixed and maps evenly spaced lines to evenly spaced lines.

Affine Function: not a linear map, because its graph does not pass through the origin.

An affine function preserves convex combinations: if we have nonnegative weights w_1, ..., w_n such that \sum_{i = 1}^n w_i = 1, then for an affine function f, we have:

f(w_1x_1 + ... + w_nx_n) = \sum_i w_i f(x_i)
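A quick numerical check of this property (assuming NumPy; the affine map f(x) = Ax + b and the sample points are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A, b = rng.random((2, 2)), rng.random(2)
f = lambda x: A @ x + b                  # affine map: linear part plus offset

xs = rng.random((4, 2))                  # four sample points x_1 .. x_4
w = rng.random(4)
w /= w.sum()                             # nonnegative weights summing to 1

lhs = f(w @ xs)                          # f applied to the convex combination
rhs = sum(wi * f(xi) for wi, xi in zip(w, xs))   # combination of the images
print(np.allclose(lhs, rhs))             # True: affine maps preserve convex combinations
```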

Derivatives and Integrals are Linear: Notes on Linear Transformations
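One way to see differentiation as a linear map: on polynomials of bounded degree, d/dx is literally a matrix acting on coefficient vectors. A minimal sketch, assuming NumPy and the coefficient ordering [c_0, c_1, ..., c_n] for c_0 + c_1 x + ... + c_n x^n:

```python
import numpy as np

n = 3                                     # polynomials up to degree 3
# D maps coefficients of p(x) to coefficients of p'(x):
# d/dx sends c_k x^k to k * c_k x^(k-1)
D = np.zeros((n + 1, n + 1))
for k in range(1, n + 1):
    D[k - 1, k] = k

p = np.array([5.0, 0.0, 3.0, 1.0])        # 5 + 3x^2 + x^3
print(D @ p)                              # [0, 6, 3, 0] -> 6x + 3x^2, its derivative

# Linearity is automatic: D @ (a*p + b*q) == a*(D @ p) + b*(D @ q)
```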

The image of a function is (loosely speaking) the range of the function.

Orthonormal Basis

Orthonormal Basis: e_i \cdot e_j = \langle e_i, e_j \rangle = \begin{cases}1 & \text{if } i = j\\ 0 & \text{otherwise}\end{cases}

  1. unit length
  2. mutually orthogonal

Euclidean distance is preserved (the coordinate formula for length holds) in an orthonormal basis.
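A small sketch (assuming NumPy; the starting vectors are random) that orthonormalizes a basis with Gram-Schmidt, checks \langle e_i, e_j \rangle = \delta_{ij}, and confirms that the coordinate formula gives the Euclidean length in that basis:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize the rows of `vectors` via classical Gram-Schmidt."""
    basis = []
    for v in vectors:
        for e in basis:
            v = v - (v @ e) * e           # subtract projections onto previous basis vectors
        basis.append(v / np.linalg.norm(v))
    return np.array(basis)

rng = np.random.default_rng(2)
E = gram_schmidt(rng.random((3, 3)))

# Gram matrix of an orthonormal basis is the identity: unit length + mutually orthogonal
print(np.allclose(E @ E.T, np.eye(3)))    # True

# In an orthonormal basis, sqrt(sum of squared coordinates) equals the Euclidean length
v = rng.random(3)
coords = E @ v                            # coordinates of v in the basis E
print(np.isclose(np.linalg.norm(coords), np.linalg.norm(v)))  # True
```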

Fourier Transform

Approximated Signal: a signal can be approximated using a Fourier basis of sinusoids.

Fourier Analysis (Decomposition): the process of transforming the original function (signal) into a Fourier series.

Fourier Composition: recombining the Fourier series into the approximated signal.
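A minimal decomposition/composition sketch (assuming NumPy; the square wave is an arbitrary test signal): project the signal onto a few sinusoids with the L^2 inner product, then sum the weighted sinusoids back into an approximation:

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
dx = x[1] - x[0]
signal = np.sign(np.sin(x))                 # test signal: a square wave on [0, 2*pi]

def inner(f_vals, g_vals):
    """L^2 inner product on [0, 2*pi], approximated by a Riemann sum."""
    return np.sum(f_vals * g_vals) * dx

# Fourier analysis: coefficient of each sinusoid = projection of the signal onto it
approx = np.zeros_like(x)
for k in range(1, 8):
    basis_k = np.sin(k * x)
    coeff = inner(signal, basis_k) / inner(basis_k, basis_k)
    approx += coeff * basis_k               # Fourier composition: weighted sum of sinusoids

l2_error = np.sqrt(inner(signal - approx, signal - approx))
print(l2_error)                             # shrinks as more sinusoids are included
```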

Fourier Decomposition used in computer graphics

// QUESTION: do we use the Discrete-Time Fourier Series instead?

Cross Product

Definition: \|u \times v\| = \sqrt{\det(u, v, u \times v)} = \|u\|\|v\|\sin \theta, \quad u \times v := \begin{bmatrix} u_2v_3 - u_3v_2\\ u_3v_1 - u_1v_3\\ u_1v_2 - u_2v_1 \end{bmatrix}
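A quick numerical check of the definition (assuming NumPy; u and v are random test vectors): the determinant identity and \|u\|\|v\|\sin\theta agree with the component formula:

```python
import numpy as np

rng = np.random.default_rng(3)
u, v = rng.random(3), rng.random(3)
c = np.cross(u, v)                        # component formula for u x v

# det(u, v, u x v) equals ||u x v||^2, so its square root is the parallelogram area
det = np.linalg.det(np.column_stack([u, v, c]))
cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
sin_theta = np.sqrt(1.0 - cos_theta**2)

print(np.isclose(np.sqrt(det),
                 np.linalg.norm(u) * np.linalg.norm(v) * sin_theta))   # True
```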

The cross product can be used to rotate a vector by 90 degrees with respect to a normal vector.

For an axis-angle rotation by any angle, we can use projection, as sketched below.
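A minimal sketch of that idea (assuming NumPy; this is the standard Rodrigues-style construction, stated here as an illustration rather than the lecture's exact formula): split v into the part along the axis, which stays fixed, and the perpendicular part, which rotates in the plane spanned by it and its 90-degree rotation n × v:

```python
import numpy as np

def rotate(v, n, theta):
    """Rotate v by angle theta about the unit axis n."""
    v_par = (v @ n) * n                        # projection onto the axis: unchanged
    v_perp = v - v_par                         # component perpendicular to the axis
    # n x v_perp is v_perp rotated 90 degrees about n, so rotate within that plane
    return v_par + np.cos(theta) * v_perp + np.sin(theta) * np.cross(n, v_perp)

n = np.array([0.0, 0.0, 1.0])                  # rotate about the z-axis
print(rotate(np.array([1.0, 0.0, 0.0]), n, np.pi / 2))   # ~[0, 1, 0]
```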

Cross Product as Matrix Multiplication

v \times u = - u \times v

... and some anti-determinant matrix identities for cross product ...
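A sketch of the skew-symmetric ("hat") matrix û such that û v = u × v, assuming NumPy; its antisymmetry mirrors v × u = -u × v:

```python
import numpy as np

def hat(u):
    """Skew-symmetric matrix of u, so that hat(u) @ v == np.cross(u, v)."""
    return np.array([[0.0,  -u[2],  u[1]],
                     [u[2],  0.0,  -u[0]],
                     [-u[1], u[0],  0.0]])

rng = np.random.default_rng(4)
u, v = rng.random(3), rng.random(3)
print(np.allclose(hat(u) @ v, np.cross(u, v)))   # True: matrix multiplication = cross product
print(np.allclose(hat(u).T, -hat(u)))            # True: the matrix is antisymmetric
```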

Determinant: a triple product

Since we can represent a linear map as a matrix, the determinant of the matrix gives the change in volume of a unit cube after applying the transformation. The sign tells us whether the transformation flips orientation.
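A quick check (assuming NumPy; the matrix is an arbitrary example): the determinant equals the signed volume of the transformed unit cube, i.e. the triple product of the transformed axes:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])          # stretches x by 2 and y by 3

# Images of the unit cube's edge vectors are the columns of A
a1, a2, a3 = A[:, 0], A[:, 1], A[:, 2]
triple_product = np.cross(a1, a2) @ a3   # signed volume of the transformed cube
print(triple_product, np.linalg.det(A))  # both 6.0

A_flip = A.copy()
A_flip[:, 0] *= -1                       # reflect one axis: orientation flips
print(np.linalg.det(A_flip))             # -6.0: negative sign means the map is flipped
```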

Jacobi Identity
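For reference, the Jacobi identity for the cross product states that for any u, v, w:

u \times (v \times w) + v \times (w \times u) + w \times (u \times v) = 0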

Lagrange's Identity: u \times (v \times w) = v(u \cdot w) - w(u \cdot v)

Derivatives

Taylor Series
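For reference, the Taylor series of a smooth function f about a point x_0 is:

f(x) = \sum_{k = 0}^{\infty} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k = f(x_0) + f'(x_0)(x - x_0) + \frac{1}{2}f''(x_0)(x - x_0)^2 + ...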

Ordinary differential equations can be used to smooth lines.

Directional Derivatives

Gradient: the vector \triangledown f(x) satisfying \triangledown f(x) \cdot u = D_u f(x) for every direction u.

Directional Derivative of a Function: take the derivative along each input variable and mix them with weights provided by a unit vector that indicates the direction.
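A minimal finite-difference check (assuming NumPy; f(x, y) = x^2 + 3xy is a hypothetical test function) that \triangledown f(x) \cdot u matches D_u f(x):

```python
import numpy as np

f = lambda p: p[0]**2 + 3.0 * p[0] * p[1]             # test function
grad_f = lambda p: np.array([2.0 * p[0] + 3.0 * p[1],  # its gradient, by hand
                             3.0 * p[0]])

x = np.array([1.0, 2.0])
u = np.array([3.0, 4.0]) / 5.0                         # unit direction

h = 1e-6
numeric = (f(x + h * u) - f(x)) / h                    # D_u f(x) by finite differences
print(numeric, grad_f(x) @ u)                          # both ~7.2
```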

Approximate Multivariable Function using Gradient as Best Linear Approximation

Example 1: taking partial derivative \frac{\partial}{\partial u_k} of f := u^T v for 1 \leq k \leq n

\begin{align*} \frac{\partial}{\partial u_k} u^Tv &= \frac{\partial}{\partial u_k} \sum_{i = 1}^n u_iv_i\\ &= \sum_{i = 1}^n \frac{\partial}{\partial u_k}(u_iv_i)\\ &= \sum_{i = 1}^n \begin{cases} 0 & \text{if } i \neq k\\ v_k & \text{if } i = k \end{cases}\\ &= v_k\\ \implies \triangledown_u (u^Tv) &= \begin{bmatrix} v_1\\ \vdots\\ v_n \end{bmatrix} = v \end{align*}

Matrix Derivative

Example 2: taking the gradient of a function that takes functions as input: F(f) := \langle f, g \rangle := \int_a^b f(x)g(x) dx. We get \triangledown F = g

Intuition of \triangledown \langle f, g \rangle

Calculation of \triangledown \langle f \rangle^2

Divergence and Curl as Derivatives of a Vector Field

Divergence: \triangledown \cdot X := \sum_{i = 1}^n \frac{\partial X_i}{\partial u_i} where \triangledown = (\frac{\partial}{\partial u_1}, ..., \frac{\partial}{\partial u_n}), X(u) = (X_1(u), ..., X_n(u))

Curl: \triangledown \times X := \begin{bmatrix} \frac{\partial X_3}{\partial u_2} - \frac{\partial X_2}{\partial u_3}\\ \frac{\partial X_1}{\partial u_3} - \frac{\partial X_3}{\partial u_1}\\ \frac{\partial X_2}{\partial u_1} - \frac{\partial X_1}{\partial u_2}\\ \end{bmatrix} where \triangledown = (\frac{\partial}{\partial u_1}, \frac{\partial}{\partial u_2}, \frac{\partial}{\partial u_3}), X(u) = (X_1(u), X_2(u), X_3(u))
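A symbolic sketch of both formulas (assuming SymPy; X(u) = (u_2 u_3, u_1, u_1 u_2) is a hypothetical test field):

```python
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3')
X = [u2 * u3, u1, u1 * u2]                 # hypothetical vector field
u = [u1, u2, u3]

div = sum(sp.diff(X[i], u[i]) for i in range(3))        # sum of partial X_i / partial u_i
curl = sp.Matrix([sp.diff(X[2], u2) - sp.diff(X[1], u3),
                  sp.diff(X[0], u3) - sp.diff(X[2], u1),
                  sp.diff(X[1], u1) - sp.diff(X[0], u2)])

print(div)    # 0: this particular field is divergence-free
print(curl)   # Matrix([[u1], [0], [1 - u3]])
```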

Divergence of X is the same as curl of 90-degree rotation of X

Changing Divergence to Curl in a Fluid Simulation creates different results

Laplacian

Laplacian: operator used to encode concavity (concave up) of a multivariable function. Equivalent definitions:

\begin{align*} \triangle f &:= \triangledown \cdot \triangledown f = \text{div}(\triangledown f) \tag{divergence of gradient}\\ \triangle f &:= \sum_{i = 1}^n \frac{\partial^2 f}{\partial x_i^2} \tag{sum of 2nd partial derivatives}\\ \triangle f &:= - \triangledown_f\left(\tfrac{1}{2} \|\triangledown f\|^2\right) \tag{gradient of Dirichlet energy}\\ \triangle f &:= \dots \tag{graph Laplacian}\\ \triangle f &:= \dots \tag{variation of surface area}\\ \triangle f &:= \text{tr}(\triangledown^2 f) \tag{trace of Hessian} \end{align*}
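A symbolic check (assuming SymPy; f(x, y) = x^3 y + y^2 is a hypothetical test function) that the divergence-of-gradient, sum-of-second-partials, and trace-of-Hessian definitions agree:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 * y + y**2                       # test function

grad = [sp.diff(f, x), sp.diff(f, y)]
div_of_grad = sp.diff(grad[0], x) + sp.diff(grad[1], y)   # div(grad f)
sum_second = sp.diff(f, x, 2) + sp.diff(f, y, 2)          # sum of 2nd partials
trace_hess = sp.hessian(f, (x, y)).trace()                # trace of Hessian

print(div_of_grad)                                        # 6*x*y + 2
print(sp.simplify(div_of_grad - sum_second))              # 0: definitions agree
print(sp.simplify(div_of_grad - trace_hess))              # 0: definitions agree
```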

Hessian

The Hessian is a symmetric matrix that gives the quadratic term of the (second-order Taylor) approximation of a function.

\begin{align*} (\triangledown^2 f) u &:= D_u (\triangledown f)\\ \triangledown^2 f &:= \begin{bmatrix} \frac{\partial^2 f}{\partial x_1 \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n}\\ \vdots & \ddots & \vdots\\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n \partial x_n} \end{bmatrix} \end{align*}
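A short sketch (assuming SymPy; f(x, y) = x^2 y + y^3 is a hypothetical test function) of the Hessian as the matrix of all second partial derivatives, and of its symmetry:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y + y**3                       # test function

H = sp.hessian(f, (x, y))                 # matrix of all second partial derivatives
print(H)                                  # Matrix([[2*y, 2*x], [2*x, 6*y]])
print(H == H.T)                           # True: mixed partials commute, so H is symmetric
```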
