# Lecture 006: Perspective Projection and Texture Mapping

## Transformations

Transforming the camera is equivalent to applying the inverse transformation to all triangles in the scene:

• Rotation: the inverse of a rotation matrix is its transpose

• Translation: the inverse of a translation is translation by the negated offset
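Both inverse rules are easy to verify numerically; a minimal sketch using NumPy:

```python
import numpy as np

# A rotation about the z-axis: for any rotation matrix R, the inverse
# is simply the transpose, so no matrix inversion is needed.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
assert np.allclose(R.T @ R, np.eye(3))  # R^T undoes R

# A translation by t is undone by translating by -t.
t = np.array([1.0, 2.0, 3.0])
p = np.array([5.0, -4.0, 0.5])
assert np.allclose((p + t) - t, p)
```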

## Clipping

Clipping: eliminating triangles that lie outside the view frustum, so we do not waste time rasterizing primitives that cannot be seen

• it makes sense to toss out whole primitives

Why near/far clipping planes?

• it is hard to rasterize a triangle whose vertices lie both in front of and behind the camera

• we don't have infinite precision in the depth buffer: the z-fighting effect gets larger when there is not enough depth precision, so don't set the near and far planes too far apart

Non-Perspective Frustum Transformation: warps the frustum into an axis-aligned volume; it is not yet perspective, because the warp alone does not reproduce the camera's perspective foreshortening.

To recover perspective: copy $z$ into the $w$ component of the homogeneous coordinate, so that the homogeneous divide by $w$ scales $x$ and $y$ by $1/z$.
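A minimal numerical sketch of the z-into-w trick (an illustrative matrix, not a full projection matrix):

```python
import numpy as np

# The last row copies z into w; after the homogeneous divide by w,
# x and y are scaled by 1/z, which is perspective foreshortening.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])  # w_out = z_in

point = np.array([2.0, 4.0, 2.0, 1.0])  # a point at depth z = 2
clip = P @ point
ndc = clip[:3] / clip[3]  # homogeneous divide: (x/z, y/z, 1)
```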

## Screen Transformation

Transform the 2D viewing plane (normalized coordinates in $[-1, 1]^2$) to pixel coordinates:

1. translate by $(1, 1)$
2. scale by $(W / 2, H / 2)$
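The two steps combine into one small helper; a sketch (the function name `ndc_to_screen` is mine):

```python
# Map normalized coordinates in [-1, 1]^2 to pixel coordinates in a
# W x H image: translate by (1, 1), then scale by (W/2, H/2).
def ndc_to_screen(x, y, W, H):
    return ((x + 1) * W / 2, (y + 1) * H / 2)

center = ndc_to_screen(0, 0, 640, 480)  # -> (320.0, 240.0)
```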

## Color Interpolation

Color Interpolation: we want to use the vertex colors to interpolate color inside the triangle

In 1D: the equation $\hat{f}(t) = (1 - t)f_i + tf_j$ can be read as a linear combination of two basis functions, $(1 - t)$ and $t$, weighted by the sample values.

Linear Interpolation in 2D:

• the interpolant is a function over the 2D plane whose graph is a plane in 3D space

• the function is $\hat{f}(x, y) = ax + by + c$

• to interpolate, we need to find coefficients such that the function matches the sample values at the sample points: $\hat{f}(x_n, y_n) = f_n, n \in \{i, j, k\}$

$$\begin{bmatrix} a\\ b\\ c\\ \end{bmatrix} = \frac{1}{(x_jy_i - x_iy_j)+(x_ky_j - x_jy_k)+(x_iy_k - x_ky_i)} \begin{bmatrix} f_i(y_k - y_j) + f_j(y_i - y_k) + f_k(y_j - y_i)\\ f_i(x_j - x_k) + f_j(x_k - x_i) + f_k(x_i - x_j)\\ f_i(x_ky_j - x_jy_k) + f_j(x_iy_k - x_ky_i) + f_k(x_jy_i - x_iy_j)\\ \end{bmatrix}$$
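Instead of the closed form, the same coefficients can be obtained by solving the 3x3 linear system directly; a sketch using NumPy (`plane_coeffs` is a hypothetical helper):

```python
import numpy as np

# Fit f(x, y) = a*x + b*y + c through three sample points: each row of
# the system is [x_n, y_n, 1] . [a, b, c]^T = f_n.
def plane_coeffs(pts, f):
    A = np.array([[x, y, 1.0] for x, y in pts])
    return np.linalg.solve(A, np.asarray(f, dtype=float))

# Samples at the corners of a right triangle.
a, b, c = plane_coeffs([(0, 0), (1, 0), (0, 1)], [1.0, 3.0, 5.0])
# c = f(0,0) = 1, a = f(1,0) - c = 2, b = f(0,1) - c = 4
```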

This complicated closed form has a simple geometric interpretation: interpolate based on the areas of the three sub-triangles created by the point inside the triangle.

Barycentric Coordinates: $\phi_i(x), \phi_j(x), \phi_k(x)$, with $\phi_i + \phi_j + \phi_k = 1$

• $\text{color}(x) = \text{color}(x_i)\phi_i + \text{color}(x_j)\phi_j + \text{color}(x_k)\phi_k$

• they are used to interpolate attributes associated with vertices

• the signed distance is already calculated in the triangle half-plane test (for example, to check whether point $P$ is inside triangle $ABC$, we check whether $\overrightarrow{AB} \times \overrightarrow{AP}$ is positive, and this cross product is proportional to the distance between line $AB$ and point $P$, so it can be reused for the barycentric coordinates)

We should not interpolate in screen space, but in 3D space, because perspective projection does not preserve ratios of distances. How do we solve this problem?
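A minimal sketch of barycentric coordinates computed from the same cross products used by the half-plane test (function names are mine):

```python
# Twice the signed area of triangle ABP, via the 2D cross product
# (B - A) x (P - A); this is the half-plane edge-test quantity.
def edge(ax, ay, bx, by, px, py):
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

# Each coordinate is the sub-triangle area opposite a vertex, divided
# by the area of the whole triangle.
def barycentric(a, b, c, p):
    area = edge(*a, *b, *c)
    phi_i = edge(*b, *c, *p) / area
    phi_j = edge(*c, *a, *p) / area
    phi_k = edge(*a, *b, *p) / area
    return phi_i, phi_j, phi_k

# The centroid gets equal weights (1/3, 1/3, 1/3).
w = barycentric((0, 0), (1, 0), (0, 1), (1/3, 1/3))
```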

Perspective Correct Interpolation: to interpolate an attribute $\phi$:

1. compute depth $z$ at each vertex
2. interpolate $\frac{1}{z}$ and $\frac{\phi}{z}$ across the triangle using the screen-space 2D barycentric coordinates
3. divide interpolated $\frac{\phi}{z}$ by interpolated $\frac{1}{z}$
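The three steps can be sketched directly (hypothetical helper `perspective_correct`; `weights` are the screen-space barycentric coordinates):

```python
# Interpolate 1/z and phi/z with screen-space barycentric weights,
# then divide to recover the perspective-correct attribute value.
def perspective_correct(weights, z, phi):
    inv_z = sum(w / zi for w, zi in zip(weights, z))
    phi_over_z = sum(w * p / zi for w, p, zi in zip(weights, phi, z))
    return phi_over_z / inv_z

# Screen-space midpoint of an edge with vertex depths 1 and 3: the
# naive screen-space average would be 0.5, but the correct value is
# biased toward the closer vertex.
val = perspective_correct([0.5, 0.5], [1.0, 3.0], [0.0, 1.0])  # ~0.25
```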

### Texture Mapping

Texture mapping can store many surface attributes:

• for color

• for wetness attribute

• for shininess

• for normal map

• for displacement mapping

• for baked ambient occlusion

• for reflections (environment maps / light probes)

• etc...

Given a model with UV:

• for each pixel in rasterized image (screen space)
• interpolate $(u, v)$ coordinates across the triangle
• sample texture at interpolated $(u, v)$
• set the color of the fragment to the sampled texture value

The sampled region of texture space may be warped, so aliasing is hard to avoid.
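A minimal nearest-neighbor version of this loop body (hypothetical `sample_nearest`; the texture is given as a list of rows):

```python
# Map an interpolated (u, v) in [0, 1]^2 to a texel of a w x h texture
# and return its value (nearest-neighbor sampling, no filtering).
def sample_nearest(texture, u, v):
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)  # clamp so u = 1.0 stays in range
    y = min(int(v * h), h - 1)
    return texture[y][x]

tex = [[10, 20],
       [30, 40]]
color = sample_nearest(tex, 0.9, 0.1)  # -> 20
```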

Magnification: the camera is too close to the object

• Problem: a single pixel on screen maps to less than one texel in the texture

• Solution: interpolate the value at the pixel center from the neighboring texels

Minification: the camera is too far from the object

• Problem: a single pixel on screen maps to a large region of the texture

• Solution: need to compute the texture average over the pixel's footprint (but averaging at run time kills performance)

• Prefiltering: compute the averages at build time, not run time, by downsampling the texture to multiple resolutions
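A build-time prefiltering sketch: each downsampling step averages 2x2 texel blocks (hypothetical `downsample` helper; assumes even dimensions):

```python
# Halve a texture by averaging each 2x2 block of texels, so that
# minification can later read one precomputed average instead of
# averaging many texels per pixel at run time.
def downsample(tex):
    h, w = len(tex), len(tex[0])
    return [[(tex[y][x] + tex[y][x + 1] + tex[y + 1][x] + tex[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

level1 = downsample([[0, 4],
                     [8, 4]])  # -> [[4.0]]
```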

### Mip Map

Mip map: a specific prefiltering technique (L. Williams, 1983) that stores the prefiltered image at every power-of-two scale

The mip map level is calculated by estimating the covered texture region using partial derivatives:

1. from screen space $(x, y)$ to barycentric coordinates
2. using barycentric coordinates to interpolate $(u, v)$ stored in vertices
3. approximate $\frac{du}{dx}, \frac{du}{dy}, \frac{dv}{dx}, \frac{dv}{dy}$ by taking differences of screen-adjacent samples and compute mip map level $d$
4. convert the normalized texture coordinate $(u, v) \in [0, 1]^2$ to pixel locations in the texture image, $(U, V) \in [0, W] \times [0, H]$
5. tri-linearly interpolate according to $(U, V, d)$
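Step 3's level computation can be sketched as follows (hypothetical `mip_level` helper; derivatives measured in texels per screen pixel):

```python
import math

# Estimate how many texels one screen pixel covers from the (du, dv)
# differences of adjacent samples, then take log2 of the footprint.
def mip_level(du_dx, dv_dx, du_dy, dv_dy):
    lx = math.hypot(du_dx, dv_dx)  # texel footprint along screen x
    ly = math.hypot(du_dy, dv_dy)  # texel footprint along screen y
    return max(0.0, math.log2(max(lx, ly)))

d = mip_level(4.0, 0.0, 0.0, 4.0)  # pixel covers ~4x4 texels -> level 2.0
```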