Before proceeding, you need to understand:
Vector spaces are useful because functions are infinite-dimensional vectors; therefore manifolds (represented as signed distance functions) are vectors, spherical harmonics are vectors, and so on.
Norm: a non-negative quantity assigned to elements of a vector space, satisfying:
\|\overrightarrow{u}\| \geq 0
\|\overrightarrow{u}\| = 0 \iff \overrightarrow{u} = \overrightarrow{0}
\|c\overrightarrow{u}\| = |c| \cdot \|\overrightarrow{u}\|
\|\overrightarrow{u}\| + \|\overrightarrow{v}\| \geq \|\overrightarrow{u} + \overrightarrow{v}\|
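The four axioms above can be sanity-checked numerically. A minimal sketch using NumPy's Euclidean norm (the vectors `u`, `v` and scalar `c` below are arbitrary choices for illustration):

```python
import numpy as np

u = np.array([3.0, -4.0])
v = np.array([1.0, 2.0])
c = -2.5

norm = np.linalg.norm  # Euclidean norm

# Non-negativity: ||u|| >= 0
assert norm(u) >= 0
# Definiteness: only the zero vector has norm 0
assert norm(np.zeros(2)) == 0 and norm(u) > 0
# Absolute homogeneity: ||c u|| = |c| ||u||
assert np.isclose(norm(c * u), abs(c) * norm(u))
# Triangle inequality: ||u|| + ||v|| >= ||u + v||
assert norm(u) + norm(v) >= norm(u + v)
```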
A Euclidean norm is any notion of length preserved by rotations, translations, and reflections of space.
A (Euclidean) inner product determines a (Euclidean) norm: \|v\| = \sqrt{v \cdot v}. The Euclidean inner product is u \cdot v := u_1 v_1 + ... + u_n v_n, which gives \|u\| := \sqrt{u_1^2 + ... + u_n^2} — but these coordinate formulas hold only when the vectors are expressed in an orthonormal basis.
L^2 norm: magnitude of a function. \|f\| := \sqrt{\int_a^b f(x)^2 dx} where [a, b] is the function's domain
Note that the L^2 norm does not strictly satisfy the definition of a norm. Consider the function f(x) = \begin{cases} 0 & \text{if } 0 < x \leq 1\\ 1 & \text{if } x = 0\\ \end{cases} This function is not the zero function, but its L^2 norm is zero, which breaks the definiteness axiom.
L^2 dot (inner) product: how well two functions "line up". \langle f, g \rangle := \int_a^b f(x)g(x) dx
Any function that satisfies the definition of a dot product is a dot product: \begin{align*}u \cdot v &= v \cdot u\\ u \cdot u &\geq 0\\ u \cdot u = 0 &\iff u = 0\\ cu \cdot v &= c(u \cdot v)\\ (u + v)\cdot w &= u \cdot w + v \cdot w\\\end{align*}
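The L^2 inner product and norm can be approximated by numerical integration. A sketch on [0, \pi] with f = \sin and g = \cos (my choice of test functions; they are orthogonal on this interval, and \|\sin\| = \sqrt{\pi/2} there):

```python
import numpy as np

a, b = 0.0, np.pi
x = np.linspace(a, b, 20001)

def integrate(y):
    """Trapezoid-rule integral of samples y over the grid x."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def l2_inner(f, g):
    """L2 inner product <f, g> = integral of f(x) g(x) over [a, b]."""
    return integrate(f(x) * g(x))

def l2_norm(f):
    """L2 norm ||f|| = sqrt(<f, f>)."""
    return np.sqrt(l2_inner(f, f))

# sin and cos do not "line up" on [0, pi]: their inner product is ~0
ip = l2_inner(np.sin, np.cos)
# ||sin|| on [0, pi] is sqrt(pi / 2)
nrm = l2_norm(np.sin)
```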
Advantage of Linear Maps:
Computationally cheap
Capture Geometric Transformation (rotation, translation, scaling)
Smooth maps can be locally approximated as linear maps using Taylor series
Affine functions preserve convex combinations: if we have weights w_1, ..., w_n such that \sum_{i = 1}^n w_i = 1, then for an affine function f, we have: f\left(\sum_{i = 1}^n w_i x_i\right) = \sum_{i = 1}^n w_i f(x_i)
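This property is easy to verify numerically. A sketch for an affine map f(x) = Ax + b (random `A`, `b`, and points chosen here just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
b = rng.standard_normal(2)

def f(x):
    """Affine map f(x) = A x + b."""
    return A @ x + b

points = [rng.standard_normal(2) for _ in range(3)]
w = np.array([0.2, 0.5, 0.3])   # convex weights: non-negative, sum to 1

# f of the convex combination ...
lhs = f(sum(wi * xi for wi, xi in zip(w, points)))
# ... equals the convex combination of the f values
rhs = sum(wi * f(xi) for wi, xi in zip(w, points))
```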
Derivatives and integrals are linear: see Notes on Linear Transformations
The image of a function is (loosely speaking) the range of the function.
Orthonormal Basis: e_i \cdot e_j = \langle{e_i, e_j}\rangle = \begin{cases}1 & \text{if } i = j\\ 0 & \text{otherwise}\\\end{cases}
Approximated Signal: can be expressed using a Fourier basis of sinusoids.
Fourier Analysis (Decomposition): the process of transforming the original function (signal) into a Fourier series.
Fourier Composition: composing the Fourier series back into the approximated signal.
// QUESTION: do we use the Discrete Time Fourier Series instead?
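Decomposition and composition can be sketched by projecting a signal onto the sinusoid basis and summing the series back. A minimal example on [0, 2\pi] (the test signal 2\sin x + 0.5\cos 3x is my choice, so the recovered coefficients are known in advance):

```python
import numpy as np

x = np.linspace(0.0, 2 * np.pi, 20001)

def integrate(y):
    """Trapezoid-rule integral of samples y over the grid x."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Signal with known Fourier content: b_1 = 2, a_3 = 0.5
f = 2.0 * np.sin(x) + 0.5 * np.cos(3 * x)

# Decomposition: project onto the orthogonal sinusoid basis
K = 5
a = [integrate(f * np.cos(k * x)) / np.pi for k in range(1, K + 1)]
b = [integrate(f * np.sin(k * x)) / np.pi for k in range(1, K + 1)]
a0 = integrate(f) / (2 * np.pi)

# Composition: rebuild the (approximated) signal from the series
f_hat = a0 + sum(a[k - 1] * np.cos(k * x) + b[k - 1] * np.sin(k * x)
                 for k in range(1, K + 1))
```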
Definition: \sqrt{\det(u, v, u \times v)} = \|u \times v\| = \|u\|\|v\|\sin \theta, u \times v := \begin{bmatrix} u_2v_3 - u_3v_2\\ u_3v_1 - u_1v_3\\ u_1v_2 - u_2v_1\\ \end{bmatrix}
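The two sides of this definition can be checked numerically. A sketch with arbitrary vectors `u`, `v` (chosen here for illustration):

```python
import numpy as np

u = np.array([1.0, 2.0, 0.5])
v = np.array([-1.0, 0.5, 2.0])

c = np.cross(u, v)

# ||u|| ||v|| sin(theta), with theta the angle between u and v
cos_t = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
sin_t = np.sqrt(1 - cos_t ** 2)
rhs = np.linalg.norm(u) * np.linalg.norm(v) * sin_t

# sqrt(det(u, v, u x v)) equals the same quantity
lhs = np.sqrt(np.linalg.det(np.column_stack([u, v, c])))
```

The determinant form works because det(u, v, u × v) = (u × v) · (u × v) = ‖u × v‖², and u × v is orthogonal to both inputs.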
For axis-angle rotation by any angle, we can use projection
... and some anti-determinant matrix identities for cross product ...
Since we can represent linear maps using matrices, the determinant of the matrix gives the change in volume of a unit cube after applying the transformation. The sign tells us whether the transformation flips orientation.
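A small 2D sketch of both facts (the map below, which scales x by 2 with a flip and y by 3, is an arbitrary example):

```python
import numpy as np

# A linear map that scales x by 2 (with a flip) and y by 3
A = np.array([[-2.0, 0.0],
              [ 0.0, 3.0]])

d = np.linalg.det(A)

# |det| = factor by which the unit square's area changes: 2 * 3 = 6
area_scale = abs(d)

# sign < 0 means the map reverses orientation (a "flip")
flipped = d < 0
```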
Lagrange's Identity: u \times (v \times w) = v(u \cdot w) - w(u \cdot v)
Ordinary differential equations can be used to smooth lines.
Gradient: \nabla f(x) \cdot u = D_u f(x)
Directional Derivative of a Function: take the derivative along each input variable and mix them with weights provided by a unit vector that indicates a direction.
Example 1: taking the partial derivative \frac{\partial}{\partial u_k} of f := u^T v for 1 \leq k \leq n gives \frac{\partial f}{\partial u_k} = v_k, so \nabla_u f = v
Example 2: taking the gradient of a function that takes functions as input: F(f) := \langle f, g \rangle := \int_a^b f(x)g(x) dx. We get \nabla F = g
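Example 1 can be verified with finite differences: for f(u) = u^T v the gradient is v, so the directional derivative along a unit vector should equal v \cdot u. A sketch (the vectors below are arbitrary test values):

```python
import numpy as np

v = np.array([1.0, -2.0, 3.0])

def f(u):
    """f(u) = u^T v; its gradient with respect to u is v itself."""
    return u @ v

u0 = np.array([0.5, 0.5, 0.5])
direction = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)  # unit vector

# Directional derivative via central finite differences
h = 1e-6
D_num = (f(u0 + h * direction) - f(u0 - h * direction)) / (2 * h)

# Gradient formula: D_u f = grad(f) . u, with grad(f) = v here
D_exact = v @ direction
```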
Divergence: \nabla \cdot X := \sum_{i = 1}^n \frac{\partial X_i}{\partial u_i} where \nabla = (\frac{\partial}{\partial u_1}, ..., \frac{\partial}{\partial u_n}), X(u) = (X_1(u), ..., X_n(u))
Curl: \nabla \times X := \begin{bmatrix} \frac{\partial X_3}{\partial u_2} - \frac{\partial X_2}{\partial u_3}\\ \frac{\partial X_1}{\partial u_3} - \frac{\partial X_3}{\partial u_1}\\ \frac{\partial X_2}{\partial u_1} - \frac{\partial X_1}{\partial u_2}\\ \end{bmatrix} where \nabla = (\frac{\partial}{\partial u_1}, \frac{\partial}{\partial u_2}, \frac{\partial}{\partial u_3}), X(u) = (X_1(u), X_2(u), X_3(u))
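Both operators can be approximated with central differences and checked against hand-computed values. A sketch for the field X(u) = (u_1 u_2, u_2 u_3, u_3 u_1) (my choice, so that div X = u_1 + u_2 + u_3 and curl X = (-u_2, -u_3, -u_1) analytically):

```python
import numpy as np

def X(u):
    """Vector field X(u) = (u1*u2, u2*u3, u3*u1)."""
    u1, u2, u3 = u
    return np.array([u1 * u2, u2 * u3, u3 * u1])

def partial(i, j, u, h=1e-5):
    """Central-difference estimate of dX_i / du_j at u (0-indexed)."""
    e = np.zeros(3)
    e[j] = h
    return (X(u + e)[i] - X(u - e)[i]) / (2 * h)

u = np.array([1.0, 2.0, 3.0])

# Divergence: sum of dX_i / du_i
div = sum(partial(i, i, u) for i in range(3))

# Curl: same component pattern as the definition above
curl = np.array([partial(2, 1, u) - partial(1, 2, u),
                 partial(0, 2, u) - partial(2, 0, u),
                 partial(1, 0, u) - partial(0, 1, u)])
```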
Laplacian: operator used to encode concavity (how "concave up" a function is) for multivariable functions. Used in:
Fourier transform, frequency decomposition
defining models in partial differential equations (the Laplace, heat, and wave equations)
characteristics of geometry
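The concavity-encoding role is easiest to see in 1D, where the Laplacian reduces to the second derivative and can be estimated by second differences. A sketch (the test functions x^2 and \sin x are my choices, with known second derivatives):

```python
import numpy as np

def laplacian_1d(f, x, h=1e-3):
    """Second-difference estimate of the 1D Laplacian f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

# f(x) = x^2 is concave up everywhere: Laplacian is +2
lap_parabola = laplacian_1d(lambda x: x * x, 1.0)

# sin is concave down at x = pi/2: Laplacian is -sin(pi/2) = -1 there
lap_sin = laplacian_1d(np.sin, np.pi / 2)
```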