# Lecture_007_Depth_and_Transparency

## Occlusion and Depth Buffer

Depth buffer: for each pixel (super sample), we keep track of the depth of the closest triangle seen so far.

0. initialize the depth of each pixel (super sample) to infinity

1. select any not-yet-drawn triangle from the buffer (order does not matter)
2. for each pixel (super sample) the triangle covers, draw it if its depth value is smaller than the stored value; skip it if its depth value is larger
3. update the depth buffer wherever a pixel (super sample) gets drawn
4. repeat until all triangles are drawn
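The steps above can be sketched as follows. This is a minimal illustration, not a real rasterizer: the `coverage`, `depth`, and `color` methods on each triangle object are assumed helpers standing in for coverage testing and attribute interpolation.

```python
import math

def rasterize_with_depth_buffer(triangles, width, height):
    """Depth-buffer sketch. Each triangle is assumed to provide
    coverage(x, y) -> bool, depth(x, y) -> float, color(x, y) -> tuple."""
    depth = [[math.inf] * width for _ in range(height)]  # step 0: init to infinity
    color = [[(0, 0, 0)] * width for _ in range(height)]
    for tri in triangles:                                # step 1: any order works
        for y in range(height):
            for x in range(width):
                if not tri.coverage(x, y):
                    continue
                z = tri.depth(x, y)
                if z < depth[y][x]:                      # step 2: closer than stored value?
                    depth[y][x] = z                      # step 3: update depth buffer
                    color[y][x] = tri.color(x, y)
    return color, depth
```

Because the test in step 2 depends only on the stored depth, the triangles can be submitted in any order and the nearest surface still wins at every sample.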

The technique above also works with supersampling, since each super sample carries its own depth value.

Space: constant space per sample for the depth buffer, independent of the number of overlapping primitives

Time: constant time per covered sample

## Transparency and Alpha

### Non-Premultiplied Alpha

Over operator for non-premultiplied alpha: a non-commutative blend, like looking through tinted glass

Given $A = (A_r, A_g, A_b)$ with alpha $\alpha_A$ and $B = (B_r, B_g, B_b)$ with alpha $\alpha_B$, computing $B$ over $A$ gives: $C = \alpha_BB+ (1 - \alpha_B)\alpha_AA, \alpha_C = \alpha_B + (1 - \alpha_B)\alpha_A$
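A direct transcription of this formula, taking colors as RGB tuples:

```python
def over_non_premultiplied(B, alpha_B, A, alpha_A):
    """B over A with non-premultiplied colors, following the formula above."""
    alpha_C = alpha_B + (1 - alpha_B) * alpha_A
    C = tuple(alpha_B * b + (1 - alpha_B) * alpha_A * a for b, a in zip(B, A))
    return C, alpha_C
```

Note the asymmetry: swapping $A$ and $B$ generally changes the result, which is why this blend is non-commutative.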

### Premultiplied Alpha

Premultiplied Alpha: compute $B$ over $A$

• $A' = (\alpha_A A_r, \alpha_A A_g, \alpha_A A_b, \alpha_A)$

• $B' = (\alpha_B B_r, \alpha_B B_g, \alpha_B B_b, \alpha_B)$

• $C' = B' + (1 - \alpha_B) A'$

• $(C_r, C_g, C_b, \alpha_C) \xrightarrow{\text{becomes}}(\frac{C_r}{\alpha_C}, \frac{C_g}{\alpha_C}, \frac{C_b}{\alpha_C})$

• This is exactly how we compose RGB value (we are expressing color in homogeneous coordinates)

Premultiplied alpha is closed under composition
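A minimal sketch of the premultiplied pipeline: premultiply, compose with the simple `C' = B' + (1 - alpha_B) A'` rule, and divide out alpha only at the end for display.

```python
def premultiply(color, alpha):
    """(R, G, B), alpha -> premultiplied RGBA tuple."""
    return tuple(alpha * c for c in color) + (alpha,)

def over_premultiplied(Bp, Ap):
    """B' over A' for premultiplied RGBA tuples: C' = B' + (1 - alpha_B) * A'."""
    alpha_B = Bp[3]
    return tuple(b + (1 - alpha_B) * a for b, a in zip(Bp, Ap))

def unpremultiply(Cp):
    """Divide by alpha to recover the displayable RGB color."""
    r, g, b, a = Cp
    return (r / a, g / a, b / a) if a else (0.0, 0.0, 0.0)
```

Because `over_premultiplied` returns another premultiplied RGBA tuple, compositions can be chained freely without converting back and forth; that is what "closed under composition" buys us.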

Rendering a mixture of opaque and transparent triangles:

1. render all opaque primitives in any order
2. disable writes to the depth buffer, then render the semi-transparent triangles in back-to-front order. If the depth test passes, blend the triangle over the color buffer; otherwise don't draw it. (We need to sort the semi-transparent triangles and hope they don't intersect.)
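The two-pass scheme can be sketched for a single pixel; the `Pixel` class and tuple formats for triangles here are illustrative assumptions, not part of any real API.

```python
import math

class Pixel:
    """One sample of the framebuffer: a color, an alpha, and a depth."""
    def __init__(self):
        self.color, self.depth = (0.0, 0.0, 0.0), math.inf

def render(opaque, transparent, px):
    """opaque: list of (z, rgb); transparent: list of (z, rgb, alpha)."""
    # Pass 1: opaque triangles in any order; depth test AND depth write.
    for z, rgb in opaque:
        if z < px.depth:
            px.depth, px.color = z, rgb
    # Pass 2: semi-transparent triangles sorted back-to-front;
    # depth test still on, but depth writes are disabled; blend with "over".
    for z, rgb, a in sorted(transparent, key=lambda t: -t[0]):
        if z < px.depth:  # passes only if in front of the opaque geometry
            px.color = tuple(a * c + (1 - a) * p for c, p in zip(rgb, px.color))
```

Leaving the depth test on in pass 2 lets opaque geometry correctly hide transparent triangles behind it, while disabling depth writes keeps one transparent triangle from occluding another.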

## Full Rasterization Pipeline

Steps:

1. Transform triangle vertices into camera-centered world space (inverse of camera transform)
2. Apply perspective projection into normalized coordinate space
3. Clipping: discard triangles that lie entirely outside the view volume (culling) and clip partially visible triangles to the box (possibly generating new triangles)
4. Transform normalized coordinates to screen coordinates and then to image coordinates
5. Pre-compute data in Barycentric Coordinates
6. Sample Coverage
7. Filtering, MIP Map, Sample Texture, Interpolation
8. Depth Test and Update Depth Value
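The geometric front of the pipeline (steps 1, 2, and 4) can be sketched for a single vertex. The 4x4 row-major matrix convention and the `[-1, 1]` normalized range mapped to `[0, width]` are assumptions for illustration.

```python
def project_vertex(v, camera_from_world, proj, width, height):
    """Sketch of steps 1, 2, and 4: world -> camera -> normalized -> image.
    camera_from_world is the inverse camera transform; both matrices are
    4x4 row-major lists of lists."""
    def mul(M, p):  # 4x4 matrix times homogeneous point
        return [sum(M[i][j] * p[j] for j in range(4)) for i in range(4)]

    p = mul(camera_from_world, list(v) + [1.0])    # step 1: into camera space
    q = mul(proj, p)                               # step 2: perspective projection
    ndc = [q[0] / q[3], q[1] / q[3], q[2] / q[3]]  # homogeneous divide
    sx = (ndc[0] * 0.5 + 0.5) * width              # step 4: normalized -> screen
    sy = (1.0 - (ndc[1] * 0.5 + 0.5)) * height     # flip y for image coordinates
    return sx, sy, ndc[2]
```

Steps 5-8 then operate per sample: barycentric coordinates interpolate depth and texture coordinates across the triangle, and the depth test from the first section decides visibility.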
