Transforming the camera is equivalent to applying the inverse of that transformation to all triangles (see the sketch below)
Rotation: the inverse of a rotation matrix is its transpose
Translation: the inverse of a translation is translation by the negated vector
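A minimal sketch of this, assuming column-vector 4x4 homogeneous conventions (numpy and the function name world_to_camera are mine, not from the notes): the camera's rotation R and position t are inverted with just a transpose and a negation, no general matrix inverse needed.

```python
import numpy as np

def world_to_camera(R, t):
    """Build the 4x4 world-to-camera matrix from the camera's rotation R (3x3)
    and position t (3,) given in world space."""
    M = np.eye(4)
    M[:3, :3] = R.T          # inverse of a rotation is its transpose
    M[:3, 3] = -R.T @ t      # inverse translation: negate (and rotate) the offset
    return M

# Example: camera at (0, 0, 5) with identity orientation.
M = world_to_camera(np.eye(3), np.array([0.0, 0.0, 5.0]))
p_world = np.array([1.0, 2.0, 0.0, 1.0])
print(M @ p_world)           # the same point expressed in camera coordinates
```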
Clipping
Clipping: eliminating triangles (or parts of triangles) outside the view frustum, so we don't spend time rasterizing primitives that cannot be seen
discarding individual fragments after rasterization is expensive
it makes more sense to toss out whole primitives before rasterization
Why near/far clipping planes?
hard to rasterize a triangle whose vertices lie both in front of and behind the camera
the depth buffer has finite precision, so the representable depth range must be bounded
To recover perspective: copy z into the w component of the homogeneous coordinate, so that the homogeneous divide by w scales x and y by 1/z (see the sketch below).
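A minimal sketch of "copy z to w", assuming points are in camera space with positive depth along the view direction (a toy matrix, not a full projection matrix that also remaps z for the depth buffer):

```python
import numpy as np

# Leaves x, y, z alone but writes z into w, so the homogeneous divide
# by w yields the perspective foreshortening x/z, y/z.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],   # w := z
])

p_cam = np.array([2.0, 1.0, 4.0, 1.0])   # point at depth z = 4
clip = P @ p_cam                          # homogeneous coordinates with w = z
ndc = clip / clip[3]                      # divide by w => (x/z, y/z, 1, 1)
print(ndc[:2])                            # perspective-projected 2D position
```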
Screen Transformation
Map points on the 2D viewing plane (normalized [-1, 1]^2 coordinates) to pixel coordinates (see the sketch after these steps):
reflect about x-axis
translate by (1, 1)
scale by (W / 2, H / 2)
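A minimal sketch of these three steps, assuming the projected point lies in [-1, 1]^2 with y pointing up, and image coordinates with y pointing down:

```python
def to_screen(x, y, W, H):
    y = -y                      # reflect about the x-axis (flip y)
    x, y = x + 1.0, y + 1.0     # translate by (1, 1): coordinates now in [0, 2]
    return x * W / 2.0, y * H / 2.0   # scale by (W/2, H/2): pixel coordinates

print(to_screen(0.0, 0.0, 640, 480))    # view center -> (320.0, 240.0)
print(to_screen(-1.0, 1.0, 640, 480))   # top-left corner -> (0.0, 0.0)
```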
Color Interpolation
In 1D: the equation \hat{f}(t) = (1 - t)f_i + t f_j can be viewed as a linear combination of two basis functions, (1 - t) and t, weighted by the endpoint values.
Linear Interpolation in 2D:
the interpolant is a linear function over 2D, i.e. a plane in 3D space (as in the image above)
the function is \hat{f}(x, y) = ax + by + c
to interpolate, we need to find coefficients such that the function matches the sample values at the sample points: \hat{f}(x_n, y_n) = f_n, n \in \{i, j, k\} (see the sketch below)
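A minimal sketch of finding (a, b, c): matching the three sample values is a 3x3 linear system (the specific points and values below are made up for illustration):

```python
import numpy as np

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])     # sample points (x_n, y_n)
f = np.array([1.0, 3.0, 5.0])                            # sample values f_n

A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(3)])  # rows [x_n, y_n, 1]
a, b, c = np.linalg.solve(A, f)                          # solve for the coefficients
print(a, b, c)                   # here: hat{f}(x, y) = 2x + 4y + 1
print(A @ np.array([a, b, c]))   # reproduces the sample values [1. 3. 5.]
```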
this 2D linear interpolation is used to interpolate attributes associated with vertices (in practice via barycentric coordinates)
the needed distances are already computed in the triangle half-plane test (for example, to check whether point P is inside triangle ABC, we check whether \overrightarrow{AP} \times \overrightarrow{AB} is positive, with a consistent sign convention across the three edges; this cross product is proportional to the distance between point P and line AB, and normalizing the three values gives the barycentric coordinates)
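A minimal sketch of this connection (function names are mine): the same 2D cross products used by the half-plane test, divided by the triangle's total signed area, give the barycentric coordinates.

```python
def cross2d(ax, ay, bx, by):
    return ax * by - ay * bx

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p with respect to triangle abc."""
    area = cross2d(b[0] - a[0], b[1] - a[1], c[0] - a[0], c[1] - a[1])         # 2 * signed area of abc
    wa = cross2d(c[0] - b[0], c[1] - b[1], p[0] - b[0], p[1] - b[1]) / area    # weight for vertex a
    wb = cross2d(a[0] - c[0], a[1] - c[1], p[0] - c[0], p[1] - c[1]) / area    # weight for vertex b
    wc = 1.0 - wa - wb                                                         # weights sum to 1
    return wa, wb, wc

# Inside the triangle all three coordinates share the sign of the area (here all >= 0).
print(barycentric((0.25, 0.25), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))   # (0.5, 0.25, 0.25)
```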
Perspective-Correct Interpolation: to interpolate an attribute \phi (see the sketch after these steps)
compute depth z at each vertex
interpolate \frac{1}{z} and \frac{\phi}{z} using 2D (screen-space) barycentric coordinates, since these quantities vary linearly across the screen-space triangle
divide interpolated \frac{\phi}{z} by interpolated \frac{1}{z}
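A minimal sketch of these three steps (the function name is mine; `weights` are the screen-space barycentric weights of the fragment):

```python
def perspective_correct(weights, zs, phis):
    """Perspective-correct value of attribute phi at a fragment.
    weights: barycentric weights at the fragment (screen space)
    zs, phis: per-vertex depth and attribute values"""
    inv_z = sum(w / z for w, z in zip(weights, zs))                      # interpolated 1/z
    phi_over_z = sum(w * p / z for w, p, z in zip(weights, phis, zs))    # interpolated phi/z
    return phi_over_z / inv_z                                            # recover phi

# Halfway (in screen space) between a near vertex (z=1, phi=0) and a far vertex (z=3, phi=1):
# the result 0.25 is correctly biased toward the near vertex.
print(perspective_correct((0.5, 0.5, 0.0), (1.0, 3.0, 1.0), (0.0, 1.0, 0.0)))
```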
Texture Mapping
Texture Mapping: used for many surface attributes:
for color
for wetness attribute
for shininess
for normal map
for displacement mapping
for baked ambient occlusion
for reflection bulb
etc...
Given a model with UV coordinates (see the sketch after these steps):
for each pixel in rasterized image (screen space)
interpolate (u, v) coordinates across the triangle
sample texture at interpolated (u, v)
set color of fragment to sampled texture value
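A minimal sketch of this loop for a single fragment, assuming nearest-neighbor sampling, an H x W x 3 numpy texture, and barycentric weights already computed during rasterization (helper names are mine):

```python
import numpy as np

def sample_nearest(texture, u, v):
    H, W = texture.shape[:2]
    x = min(int(u * W), W - 1)    # normalized u -> texel column
    y = min(int(v * H), H - 1)    # normalized v -> texel row
    return texture[y, x]

def shade_fragment(texture, weights, uvs):
    """weights: barycentric weights of the fragment; uvs: the three vertex (u, v) pairs."""
    u = sum(w * uv[0] for w, uv in zip(weights, uvs))   # interpolate u across the triangle
    v = sum(w * uv[1] for w, uv in zip(weights, uvs))   # interpolate v across the triangle
    return sample_nearest(texture, u, v)                # fragment color = sampled texel value

tex = np.random.rand(64, 64, 3)
print(shade_fragment(tex, (1/3, 1/3, 1/3), [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]))
```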
Magnification: camera is too close to the object
Problem: a single pixel on screen maps to less than one texel in the texture
Solution: interpolate the texture value at the pixel center from nearby texels (e.g., bilinear filtering; see the sketch below)
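A minimal sketch of bilinear filtering (my choice of interpolation scheme here), assuming an H x W x 3 numpy texture and simple clamping at the edges: blend the four texels surrounding the sample location by their fractional offsets.

```python
import numpy as np

def sample_bilinear(texture, u, v):
    H, W = texture.shape[:2]
    x = u * W - 0.5                       # continuous texel coordinates
    y = v * H - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0               # fractional position inside the 2x2 cell

    def texel(i, j):                      # clamp indices at the texture border
        return texture[np.clip(j, 0, H - 1), np.clip(i, 0, W - 1)]

    top = (1 - fx) * texel(x0, y0)     + fx * texel(x0 + 1, y0)
    bot = (1 - fx) * texel(x0, y0 + 1) + fx * texel(x0 + 1, y0 + 1)
    return (1 - fy) * top + fy * bot      # lerp horizontally, then vertically

tex = np.random.rand(4, 4, 3)
print(sample_bilinear(tex, 0.5, 0.5))
```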
Minification: camera is too far from the object
Problem: a single pixel on screen maps to a large region of the texture
Solution: need to compute the texture average over the pixel's footprint (but averaging at run time kills performance)
Prefiltering: compute the averages at build time, not run time, by downsampling the texture into multiple lower-resolution copies ahead of time (see the sketch below)
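A minimal sketch of prefiltering at build time, assuming a square power-of-two texture stored as an H x W x 3 numpy array: repeatedly average 2x2 blocks to produce a pyramid of downsampled copies (this pyramid is exactly what a mip map stores).

```python
import numpy as np

def build_mip_chain(texture):
    levels = [texture.astype(np.float64)]
    while levels[-1].shape[0] > 1:
        prev = levels[-1]
        # each texel of the new level is the average of a 2x2 block of the previous level
        down = 0.25 * (prev[0::2, 0::2] + prev[1::2, 0::2] +
                       prev[0::2, 1::2] + prev[1::2, 1::2])
        levels.append(down)
    return levels

chain = build_mip_chain(np.random.rand(8, 8, 3))
print([lvl.shape for lvl in chain])   # [(8, 8, 3), (4, 4, 3), (2, 2, 3), (1, 1, 3)]
```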
Mip Map
Mip Map: a specific prefiltering technique that stores a pyramid of progressively downsampled copies of the texture
Pipeline:
from screen space (x, y) to barycentric coordinates
use barycentric coordinates to interpolate the (u, v) coordinates stored at the vertices
approximate \frac{du}{dx}, \frac{du}{dy}, \frac{dv}{dx}, \frac{dv}{dy} by taking differences of screen-adjacent samples and compute mip map level d
convert the normalized texture coordinate (u, v) \in [0, 1]^2 to a pixel location in the texture image, (U, V) \in [0, W] \times [0, H]
determine the addresses of the texels needed by the filter (8 neighbors for trilinear filtering: a 2x2 block in each of the two nearest mip levels), as in the sketch below
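A minimal sketch of the level computation and the trilinear lookup, reusing the hypothetical sample_bilinear and build_mip_chain helpers sketched earlier in these notes (the level formula below, based on the larger screen-space derivative, is one common choice, not the only one):

```python
import numpy as np

def mip_level(du_dx, dv_dx, du_dy, dv_dy, tex_w, tex_h):
    # size of one screen pixel's footprint in texels, per screen direction
    lx = np.hypot(du_dx * tex_w, dv_dx * tex_h)
    ly = np.hypot(du_dy * tex_w, dv_dy * tex_h)
    return max(0.0, float(np.log2(max(lx, ly))))   # d = 0 means full resolution

def sample_trilinear(mip_chain, u, v, d):
    d = min(d, len(mip_chain) - 1)
    lo = int(np.floor(d))                          # nearest coarser/finer levels
    hi = min(lo + 1, len(mip_chain) - 1)
    t = d - lo                                     # blend factor between the two levels
    return ((1 - t) * sample_bilinear(mip_chain[lo], u, v)
            + t * sample_bilinear(mip_chain[hi], u, v))

chain = build_mip_chain(np.random.rand(64, 64, 3))
d = mip_level(0.05, 0.0, 0.0, 0.05, 64, 64)        # derivatives from screen-adjacent samples
print(d, sample_trilinear(chain, 0.3, 0.7, d))
```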