Transforming the camera is equivalent to applying the inverse transformation to every triangle in the scene
Rotation: the inverse of a rotation matrix is its transpose
Translation: the inverse of a translation is translation by the negated offset
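A minimal sketch of these two facts combined into a world-to-camera matrix (function and argument names are illustrative, assuming the camera pose is a numpy 3x3 rotation plus a position vector):

```python
import numpy as np

def view_matrix(cam_rotation, cam_position):
    """World-to-camera transform: invert the camera's pose.

    cam_rotation: 3x3 rotation of the camera in world space.
    cam_position: length-3 camera position in world space.
    """
    R_inv = cam_rotation.T            # inverse of a rotation is its transpose
    t_inv = -R_inv @ cam_position     # inverse of a translation is the negated offset
    view = np.eye(4)
    view[:3, :3] = R_inv
    view[:3, 3] = t_inv
    return view
```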
Clipping
View Frustum: the region of space visible to the camera, bounded by a near and a far plane (plus four side planes)
Clipping: eliminating triangles outside the view frustum so we don't waste time rasterizing primitives that can't be seen
discarding individual fragments is expensive
so it makes sense to toss out whole primitives early
Clipping Partially-Overlapping Triangles by Splitting the Clipped Polygon into Triangles (sketched below)
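A minimal sketch of one way to do this: Sutherland-Hodgman clipping against a single plane, then fan triangulation. The plane representation and names are assumptions, and real pipelines clip in homogeneous clip space rather than 3D:

```python
def clip_polygon_against_plane(points, plane_normal, plane_d):
    """Clip a convex polygon against one plane (Sutherland-Hodgman).

    A point p is kept when dot(plane_normal, p) + plane_d >= 0.
    Returns the clipped polygon's vertices (possibly more than 3).
    """
    def signed_dist(p):
        return sum(n * x for n, x in zip(plane_normal, p)) + plane_d

    out = []
    for i in range(len(points)):
        a, b = points[i], points[(i + 1) % len(points)]
        da, db = signed_dist(a), signed_dist(b)
        if da >= 0:
            out.append(a)                   # keep vertices on the inside
        if (da >= 0) != (db >= 0):          # edge crosses the plane
            t = da / (da - db)              # intersection parameter along a -> b
            out.append(tuple(ax + t * (bx - ax) for ax, bx in zip(a, b)))
    return out

def triangulate_fan(points):
    """Split a convex polygon back into triangles by fanning from vertex 0."""
    return [(points[0], points[i], points[i + 1]) for i in range(1, len(points) - 1)]
```

Clipping a triangle against all six frustum planes just repeats the plane clip, feeding each output polygon into the next plane.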
Why near/far clipping planes?
it is hard to rasterize triangles with vertices both in front of and behind the camera
the depth buffer has finite precision
Z-fighting gets worse when depth-buffer precision is insufficient (don't make the near-to-far range too large); the sketch below shows why
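A small numeric illustration of the precision argument. The depth mapping below is one common convention (near maps to 0, far to 1) and the exact numbers are illustrative:

```python
def ndc_depth(z, near, far):
    """Depth in [0, 1] after the perspective divide (one common
    convention): z = near maps to 0, z = far maps to 1, nonlinearly."""
    return (far / (far - near)) * (1.0 - near / z)

# Two surfaces 0.5 apart near the far plane, with far = 1000:
for near in (0.1, 10.0):
    d1 = ndc_depth(999.0, near, 1000.0)
    d2 = ndc_depth(999.5, near, 1000.0)
    steps = (d2 - d1) * 2**24       # separation in 24-bit depth-buffer steps
    print(f"near={near}: {steps:.1f} depth steps apart")
# near=0.1 -> ~0.8 steps: the surfaces can quantize to the same value (z-fighting)
# near=10  -> ~85 steps: comfortably distinct
```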
Non-Perspective Frustum Transformation: warps the view frustum into an axis-aligned box; on its own it is not a perspective projection, since it only reshapes space to align with the camera, without any foreshortening
To get perspective back: copy z into the w component of the homogeneous coordinate, so the divide by w shrinks distant geometry.
Full Perspective Matrix: combines the view-frustum transformation and the perspective projection
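A sketch of such a matrix, assuming the common OpenGL convention (camera looking down -z, clip coordinates in [-1, 1]):

```python
import numpy as np

def perspective(fovy_rad, aspect, near, far):
    """OpenGL-style full perspective matrix.

    The bottom row copies -z into w, so the homogeneous divide
    produces foreshortening; the third row maps camera-space depth
    between the near and far planes into clip-space depth [-1, 1].
    """
    f = 1.0 / np.tan(fovy_rad / 2.0)
    return np.array([
        [f / aspect, 0.0,  0.0,                         0.0],
        [0.0,        f,    0.0,                         0.0],
        [0.0,        0.0, (far + near) / (near - far),  2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                         0.0],
    ])
```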
Screen Transformation
Transform the 2D viewing plane ([-1, 1]^2) to pixel coordinates (sketched below):
reflect about x-axis
translate by (1, 1)
scale by (W / 2, H / 2)
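The three steps as code, a sketch assuming NDC input in [-1, 1]^2 and screen y growing downward:

```python
def ndc_to_screen(x, y, width, height):
    """Map NDC coordinates in [-1, 1]^2 to pixel coordinates."""
    x, y = x, -y                                 # reflect about x-axis
    x, y = x + 1.0, y + 1.0                      # translate by (1, 1) into [0, 2]^2
    return x * width / 2.0, y * height / 2.0     # scale by (W/2, H/2)
```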
Color Interpolation
Color Interpolation: we want to interpolate color inside a triangle from its vertex colors
Linear interpolation in 1D
Linear interpolation in 1D is Linear Combination of Two Functions
In 1D: the equation \hat{f}(t) = (1 - t)f_i + tf_j expresses interpolation as a linear combination of two basis functions, (1 - t) and t.
Linear Interpolation in 2D:
the interpolant is a linear function over the 2D domain, visualized as a plane in 3D (as in the image above)
the function is \hat{f}(x, y) = ax + by + c
to interpolate, we need coefficients such that the function matches the sample values at the sample points: \hat{f}(x_n, y_n) = f_n, n \in \{i, j, k\}
this is used to interpolate attributes associated with vertices
the required distances are already computed by the triangle half-plane test (for example, to check whether point P is inside triangle ABC, we check the sign of the 2D cross product \overrightarrow{AB} \times \overrightarrow{AP}; its magnitude is proportional to the distance from P to line AB, and normalizing the three edge values gives the barycentric coordinates, sketched below)
Triangular Half-plane Test
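A sketch of how the half-plane edge tests double as (unnormalized) barycentric coordinates, assuming a counterclockwise triangle in 2D:

```python
def cross2d(ax, ay, bx, by):
    """2D cross product: twice the signed area of the triangle the
    two vectors span; proportional to point-to-line distance."""
    return ax * by - ay * bx

def barycentric(p, a, b, c):
    """Barycentric coordinates of p in triangle abc via signed areas.

    Each edge value is exactly the half-plane rasterization test:
    cross(AB, AP) is twice the signed area of ABP, which (normalized)
    is the coordinate associated with the opposite vertex C, etc.
    """
    px, py = p; ax, ay = a; bx, by = b; cx, cy = c
    area  = cross2d(bx - ax, by - ay, cx - ax, cy - ay)          # 2 * area(ABC)
    gamma = cross2d(bx - ax, by - ay, px - ax, py - ay) / area   # edge AB -> weight of C
    alpha = cross2d(cx - bx, cy - by, px - bx, py - by) / area   # edge BC -> weight of A
    beta  = cross2d(ax - cx, ay - cy, px - cx, py - cy) / area   # edge CA -> weight of B
    return alpha, beta, gamma
```

An attribute is then interpolated as \hat{f}(P) = \alpha f_A + \beta f_B + \gamma f_C.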
We should not interpolate in screen space, but in 3D space. How to solve this problem?
Perspective-Incorrect Interpolation: computing barycentric coordinates from 2D screen coordinates gives wrong attribute values and derivative discontinuities at triangle edges
Perspective-Correct Interpolation: to interpolate an attribute \phi (sketched below):
compute the depth z at each vertex
interpolate \frac{1}{z} and \frac{\phi}{z} using 2D barycentric coordinates (both quantities are linear in screen space)
divide the interpolated \frac{\phi}{z} by the interpolated \frac{1}{z}
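The three steps as a sketch (function name and argument layout are assumptions):

```python
def perspective_correct_interp(bary, z, phi):
    """Perspective-correct interpolation of attribute phi.

    bary: screen-space barycentric coordinates (alpha, beta, gamma).
    z:    camera-space depths at the three vertices.
    phi:  attribute values at the three vertices.
    1/z and phi/z are linear in screen space, so interpolate those
    and divide at the end.
    """
    inv_z = sum(w / zi for w, zi in zip(bary, z))                   # interpolated 1/z
    phi_z = sum(w * p / zi for w, p, zi in zip(bary, phi, z))       # interpolated phi/z
    return phi_z / inv_z
```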
Normal Map and Displacement Map
Texture Mapping
for color
for wetness attribute
for shininess
for normal map
for displacement mapping
for baked ambient occlusion
for reflections (environment map)
etc...
Texture Coordinates
Periodic Texture Coordinates
Given a model with UV coordinates (per-fragment loop sketched after these steps):
for each pixel in rasterized image (screen space)
interpolate (u, v) coordinates across the triangle
sample texture at interpolated (u, v)
set color of fragment to sampled texture value
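A sketch of the loop body for one fragment, assuming `texture` is an H x W x 3 numpy array and using nearest-neighbor lookup for brevity (filtering is discussed next):

```python
def shade_fragment(bary, uv_a, uv_b, uv_c, texture):
    """Interpolate (u, v) with barycentric weights, then sample."""
    u = bary[0] * uv_a[0] + bary[1] * uv_b[0] + bary[2] * uv_c[0]
    v = bary[0] * uv_a[1] + bary[1] * uv_b[1] + bary[2] * uv_c[1]
    u, v = u % 1.0, v % 1.0                     # periodic (wrapping) texture coordinates
    h, w = texture.shape[:2]
    return texture[int(v * h) % h, int(u * w) % w]   # nearest texel
```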
The sampled region of texture space may be warped, so aliasing is hard to avoid
Solution to Magnification
Magnification: camera is too close to the object
Problem: a single pixel on screen maps to less than one texel in the texture
Solution: interpolate between neighboring texel values at the sample point (bilinear filtering, sketched below)
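A bilinear filtering sketch, assuming (u, v) in [0, 1], texel centers at half-integer offsets, and clamp-at-border addressing:

```python
import numpy as np

def sample_bilinear(texture, u, v):
    """Blend the 4 texels surrounding the sample point (u, v)."""
    h, w = texture.shape[:2]
    x = u * w - 0.5                          # continuous texel coordinates
    y = v * h - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    tx, ty = x - x0, y - y0                  # fractional offsets

    def texel(i, j):
        return texture[np.clip(j, 0, h - 1), np.clip(i, 0, w - 1)]

    top = (1 - tx) * texel(x0, y0)     + tx * texel(x0 + 1, y0)
    bot = (1 - tx) * texel(x0, y0 + 1) + tx * texel(x0 + 1, y0 + 1)
    return (1 - ty) * top + ty * bot
```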
Aliasing due to minification
Ideal minification result
Minification: camera is too far from the object
Problem: a single pixel on screen maps to a large region of the texture
Solution: compute the texture average over the pixel's footprint (but averaging at run time kills performance)
Prefiltering: compute the averages at build time, not run time; we downsample the texture to multiple resolutions ahead of time
Mip Map
Mip Map: a specific prefiltering technique
MIP map (L. Williams '83): store a prefiltered copy of the image at every power-of-two scale
MIP map storage cost is low (the extra levels add only one third of the original size)
Calculating the MIP Map Level: estimate the texture region covered by a pixel using partial derivatives of (u, v) with respect to screen position (sketched below)
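A sketch of the level computation, following the standard log2-of-footprint estimate (names are illustrative):

```python
import numpy as np

def mip_level(du_dx, dv_dx, du_dy, dv_dy, tex_w, tex_h):
    """Estimate the MIP level from screen-space UV derivatives.

    The pixel's footprint in texel space is approximated by the longer
    of the two derivative vectors; each MIP level halves it.
    """
    lx = np.hypot(du_dx * tex_w, dv_dx * tex_h)   # texels stepped per pixel in x
    ly = np.hypot(du_dy * tex_w, dv_dy * tex_h)   # texels stepped per pixel in y
    footprint = max(lx, ly)
    return max(0.0, np.log2(max(footprint, 1e-8)))  # level 0 = full resolution
```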
Different MIP Map Levels
MIP map with all pixels at level 1
MIP map with all pixels at level 4
But we don't want jumps between MIP Map levels
MIP Map blending
Tri-linear Interpolation: interpolation after interpolation (bilinear within each of two adjacent MIP levels, then linear between the levels; sketched below)
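A trilinear sketch building on the `sample_bilinear` sketch above, assuming `mip_pyramid[0]` is the full-resolution image:

```python
import numpy as np

def sample_trilinear(mip_pyramid, u, v, d):
    """Bilinear lookups in two adjacent MIP levels, then blend."""
    lo = min(int(np.floor(d)), len(mip_pyramid) - 1)
    hi = min(lo + 1, len(mip_pyramid) - 1)
    t = d - np.floor(d)                      # blend factor between the levels
    return (1 - t) * sample_bilinear(mip_pyramid[lo], u, v) \
         + t * sample_bilinear(mip_pyramid[hi], u, v)
```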
Isotropic Filtering (Trilinear) vs. Anisotropic Filtering
Pipeline (stitched together in the sketch after this list):
from screen space (x, y) to barycentric coordinates
using barycentric coordinates to interpolate (u, v) stored in vertices
approximate \frac{du}{dx}, \frac{du}{dy}, \frac{dv}{dx}, \frac{dv}{dy} by taking differences of screen-adjacent samples and compute mip map level d
convert the normalized texture coordinate (u, v) \in [0, 1]^2 to pixel locations in the texture image, (U, V) \in [0, W] \times [0, H]
determine the addresses of the texels needed by the filter (8 texels for trilinear: 4 in each of two adjacent MIP levels)
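A sketch stitching the pipeline together, reusing the hypothetical helpers above (`barycentric`, `mip_level`, `sample_trilinear`); the derivative step approximates \frac{du}{dx} etc. by differencing UVs recovered at screen-adjacent sample positions:

```python
def sample_textured_fragment(p, tri_screen, tri_uv, mip_pyramid):
    """End-to-end texture lookup for one fragment at screen position p.

    tri_screen: three (x, y) screen positions; tri_uv: their (u, v)s.
    """
    h, w = mip_pyramid[0].shape[:2]

    def uv_at(q):                                 # steps 1-2: barycentric, then UV
        bary = barycentric(q, *tri_screen)
        u = sum(wgt * uv[0] for wgt, uv in zip(bary, tri_uv))
        v = sum(wgt * uv[1] for wgt, uv in zip(bary, tri_uv))
        return u, v

    u, v = uv_at(p)
    u_dx, v_dx = uv_at((p[0] + 1, p[1]))          # step 3: finite differences
    u_dy, v_dy = uv_at((p[0], p[1] + 1))
    d = mip_level(u_dx - u, v_dx - v, u_dy - u, v_dy - v, w, h)
    return sample_trilinear(mip_pyramid, u % 1.0, v % 1.0, d)   # steps 4-5
```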