BRDF (Bidirectional reflectance distribution function)[4] is a simplified BSSRDF, assuming that light enters and leaves the surface at the same point.
BTDF (Bidirectional transmittance distribution function)[1] is similar to BRDF, but describes light transmitted through to the opposite side of the surface.
BDF (Bidirectional distribution function) is the collective term for BRDF and BTDF.
BSSRDF (Bidirectional scattering-surface reflectance distribution function, or Bidirectional surface scattering RDF)[4][5] describes the relation between outgoing radiance and incident flux, including phenomena like subsurface scattering (SSS). The BSSRDF describes how light is transported between any two rays that hit a surface.
BSSTDF (Bidirectional scattering-surface transmittance distribution function) is like BTDF but with subsurface scattering.
BSSDF (Bidirectional scattering-surface distribution function) is the collective term for BSSTDF and BSSRDF. Also known as BSDF (Bidirectional scattering distribution function).
The rendering equation is discussed in the Spherical Harmonic Lighting section, so I will not repeat it here; please refer to that section. This ZhiHu article also explains the rendering equation.
Radiance: the light leaving an object toward the viewer. Irradiance (incoming radiance): the light arriving at an object from its surroundings.
Simple light hacking: environment map + ambient light + directional/point/cone = result
ambient light: the low-frequency part of the spherical irradiance distribution
environment map: the high-frequency part of the spherical irradiance distribution
directional/point/cone: additional direct lights
Sometimes people use HDRI (High Dynamic Range Image) to refer to an environment map. Note that HDR is a color/display standard, while an environment map is a texture.
Here is an interactive dive into Phong shading. Essentially, Phong is an approximation of a BRDF.
BRDF Properties:
Non-Negativity: should not produce negative radiance: f_r(v, l) \geq 0 \quad \forall v, l \in \Omega
Helmholtz Reciprocity: incoming (light) and outgoing (view) directions can be swapped
Energy Conservation: cannot generate energy: \int_{\Omega_h} f_r(v, l) \cos(\theta) \, d\omega \leq 1
There exist many ways to calculate shadows. The shadow map is one of the most basic methods, but it has proven good enough for the game industry over many years.
Shadow mapping is explained and implemented in my post Writing Minecraft Shader, so I will not repeat it here.
The issue is sampling rate. Even with z-clipping, with only a single shadow map you will see that the shadow is not perfectly aligned at the point where the object meets the ground. This artifact is most visible when the sun angle is large; one solution is to transition to another shadow algorithm during sunset.
Instead of specifying a constant ambient term, the global illumination can be pre-calculated. Specifically, we can use spherical harmonic coefficients to store a pre-rendered radiance map, thereby avoiding run-time integration.
A lightmap is a texture (not the model's texture, but a texture over the assembled world) where each pixel stores the irradiance (as spherical harmonic coefficients) at that point.
Because the frequencies are separated, we can reuse compression methods for HDR color output to store RGBA with 12 SH coefficients in 32 bits (the same space as one RGBA8 color).
Lightmap: trade off between space and time
efficient at run time with fine details
long and expensive computation (requires light farm)
only handles static scenes and lights (there are hacks for dynamic scenes, but with bad results)
storage intensive on hard drive and GPU
Since texture UV parameterization is hard, we instead calculate the irradiance at points floating in the air (light probes). To sample the irradiance at a point, we interpolate the spherical harmonic coefficients of nearby probes.
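A sketch of the probe-interpolation idea in Python. The 4-coefficient L1 SH basis and the inverse-distance weighting here are simplifying assumptions for illustration; a real engine would interpolate within the probe grid or tetrahedralization the point falls in.

```python
import math

# First-order (L1) SH basis: 4 coefficients, standard constants.
def sh_basis(d):
    x, y, z = d
    return [0.282095,        # Y_0^0
            0.488603 * y,    # Y_1^-1
            0.488603 * z,    # Y_1^0
            0.488603 * x]    # Y_1^1

def interpolate_probes(point, probes):
    """probes: list of (position, coeffs); coeffs is one SH coefficient
    vector (single color channel for brevity). Inverse-distance
    weighting stands in for the engine's real probe blending."""
    weights = [1.0 / max(math.dist(point, pos), 1e-6) for pos, _ in probes]
    total = sum(weights)
    weights = [w / total for w in weights]
    n = len(probes[0][1])
    return [sum(w * coeffs[i] for w, (_, coeffs) in zip(weights, probes))
            for i in range(n)]

def eval_irradiance(coeffs, normal):
    # Dot the interpolated coefficients with the SH basis
    # evaluated in the shading normal's direction.
    return sum(c * b for c, b in zip(coeffs, sh_basis(normal)))
```

A point halfway between two probes simply receives the average of their coefficient vectors, which is then evaluated per-pixel against the surface normal.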
For industry use, light probe can be generated based on geometry (which involves 3D manifold sampling).
Now, we can also do reflection probes, with fewer sampling points but higher-frequency content.
In modern game engines, light probes can be generated in real time for dynamic scenes. When the scene or location changes, we update the light probes once every few seconds. Sometimes we defer the update to frames that take less computation, to reduce frame rate fluctuations.
In the NPBR section above, we introduced Blinn-Phong as one BRDF model. The PBR section below covers more realistic BRDF models.
Microfacet Theory: the reason reflections look blurry or sharp is that the specified normal is only an average of many micro-normals.
In the special case where the microfacets are discrete (i.e. the surface of the manifold is not differentiable, but looks piecewise), we get a discrete microfacet model. In this section, we focus on the continuous case.
Cook-Torrance BRDF is one BRDF model that utilizes the Microfacet Theory.
In the above equation:
- w_i denotes the incident (irradiance) direction
- w_o denotes the outgoing (radiance) direction
- n denotes the surface macro-normal (specified by the triangulated model)
- h denotes the surface micro-normal (specified by the material)
Note that for metals, the electrons absorb light, so some frequencies can never get back out. For non-metals, the light bounces within molecules and eventually exits.
We first look at the term f_{Lambert} = \frac{c}{\pi}
Suppose the irradiance and the BRDF are both uniform constants; the rendering equation becomes:
Our goal is energy conservation, L_o \leq L_i, therefore we must have f_r \leq \frac{1}{\pi}. Making the absorbed energy a variable c \leq 1, we have f_{Lambert} = \frac{c}{\pi}.
The Lambert term above characterizes diffuse reflection; the other terms characterize specular.
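The \frac{1}{\pi} normalization can be verified numerically: integrating f_r \cos\theta over the hemisphere must give exactly 1 when f_r = \frac{1}{\pi}. A Python sketch (the grid resolution is an arbitrary choice):

```python
import math

def hemisphere_integral(f_r, n=2000):
    """Midpoint-rule evaluation of
    \int_0^{2pi} \int_0^{pi/2} f_r cos(t) sin(t) dt dphi
    for a constant BRDF f_r."""
    total = 0.0
    dt = (math.pi / 2) / n
    for i in range(n):
        t = (i + 0.5) * dt
        total += f_r * math.cos(t) * math.sin(t) * dt
    return total * 2 * math.pi

print(hemisphere_integral(1.0 / math.pi))  # ~1.0: no energy gained or lost
```

Any f_r larger than 1/π would make this integral exceed 1, i.e. the surface would emit more energy than it receives.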
It is the distribution of the micro-normal vectors at a single point.
We usually use GGX Distribution as Normal Distribution Function.
// roughness here is alpha, the spread of the micro-normal distribution
float D_GGX(float NoH, float roughness) {
float a2 = roughness * roughness;
float f = (NoH * NoH) * (a2 - 1.0) + 1.0;
return a2 / (PI * f * f);
}
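As a sanity check, a correctly normalized NDF satisfies \int_{\Omega_h} D(h) (n \cdot h) \, d\omega = 1. A Python port of the GGX code above plus a numeric check (assuming, as in the code above, that roughness is used directly as alpha):

```python
import math

def D_GGX(cos_theta, alpha):
    # Same formula as the HLSL D_GGX above, with NoH = cos_theta.
    a2 = alpha * alpha
    f = cos_theta * cos_theta * (a2 - 1.0) + 1.0
    return a2 / (math.pi * f * f)

def ndf_norm(alpha, n=4000):
    """Midpoint-rule evaluation of
    2*pi * \int_0^{pi/2} D(cos t) cos(t) sin(t) dt, which should be 1."""
    total = 0.0
    dt = (math.pi / 2) / n
    for i in range(n):
        t = (i + 0.5) * dt
        total += D_GGX(math.cos(t), alpha) * math.cos(t) * math.sin(t) * dt
    return total * 2 * math.pi

print(ndf_norm(0.5))  # ~1.0 regardless of alpha
```

This normalization is what keeps the specular lobe energy-consistent as roughness changes: rougher surfaces get wider but shorter lobes.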
// TODO: more
Captures how much of the outgoing light is blocked by microfacets.
float GGX(float NdotV, float k) {
return NdotV / (NdotV * (1.0 - k) + k);
}
float G_Smith(float NdotV, float NdotL, float roughness) {
float k = pow(roughness + 1.0, 2.0) / 8.0;
return GGX(NdotL, k) * GGX(NdotV, k);
}
// TODO: more
Captures how reflectivity varies with viewing angle.
float F_Schlick(float VoH, float f0) {
float f = pow(1.0 - VoH, 5.0); // the exponent 5 comes from Schlick's approximation
return f0 + (1.0 - f0) * f;
}
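A quick check of the two endpoints: at head-on incidence Schlick reduces to the base reflectance f0, and at grazing incidence it approaches full reflection. A Python sketch:

```python
def F_Schlick(VoH, f0):
    # Same formula as the HLSL F_Schlick above.
    f = (1.0 - VoH) ** 5
    return f0 + (1.0 - f0) * f

print(F_Schlick(1.0, 0.04))  # head-on: reduces to f0
print(F_Schlick(0.0, 0.04))  # grazing: approaches 1.0
```

This grazing-angle brightening is why even rough dielectrics look mirror-like when viewed at a shallow angle.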
// TODO: more
We capture diffuse, specular, glossiness each using a texture map.
// TODO: code and theory for Specular Glossiness
One problem with the Specular Glossiness model is that artists can easily set the Fresnel term incorrectly. To solve this, we wrap another layer that restricts the value of Fresnel, and call the resulting model the Metallic Roughness model.
struct MetallicRoughness {
float3 base_color;
float3 normal;
float roughness;
float metallic;
};
When metallic is low (non-metal), the specular does not contain the color of base_color. When metallic is high, base_color is used to calculate Fresnel. Because of this, we eliminate storing RGB for specular, replacing it with a single A channel for metallic, reducing memory usage.
SpecularGlossiness ConvertMetallicRoughnessToSpecularGlossiness(MetallicRoughness metallic_roughness) {
float3 base_color = metallic_roughness.base_color;
float roughness = metallic_roughness.roughness;
float metallic = metallic_roughness.metallic;
float3 dielectricSpecular = float3(0.04f, 0.04f, 0.04f); // common F0 assumed for dielectrics
float3 specular = lerp(dielectricSpecular, base_color, metallic);
float3 diffuse = base_color - base_color * metallic;
SpecularGlossiness specular_glossiness;
specular_glossiness.specular = specular;
specular_glossiness.diffuse = diffuse;
specular_glossiness.glossiness = 1.0f - roughness;
return specular_glossiness;
}
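A Python sketch of the same conversion, useful for checking the two extremes (the dielectric F0 of 0.04 is a common assumption, not a value from this document):

```python
def mr_to_sg(base_color, roughness, metallic, dielectric_f0=0.04):
    """Per-channel MetallicRoughness -> SpecularGlossiness.
    lerp(F0, base_color, metallic) done component-wise."""
    specular = [dielectric_f0 + (c - dielectric_f0) * metallic
                for c in base_color]
    diffuse = [c * (1.0 - metallic) for c in base_color]
    glossiness = 1.0 - roughness
    return specular, diffuse, glossiness

# Pure dielectric: specular collapses to the F0 constant, diffuse keeps base_color.
print(mr_to_sg([0.5, 0.2, 0.1], 0.3, 0.0))
# Pure metal: specular becomes base_color, diffuse goes to black.
print(mr_to_sg([0.9, 0.6, 0.3], 0.3, 1.0))
```

The two extremes show why the model needs no specular RGB: everything specular is derived from base_color plus the single metallic scalar.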
One artifact created by the Metallic Roughness model is an undesirable white edge at the transition between metallic and non-metallic areas, especially at lower resolution.
// TODO: GAMES201/GAMES202
// TODO
Idea: since the sampling rate is lower for far-away objects on screen, we adjust the sampling rate of the shadow map to match the viewing sampling rate.
Challenge:
You need to blend between different shadow map levels (cascades), otherwise you will see seams. Solving this involves dirty hacks in shader code
We typically cannot do colored shadows for translucent surfaces
Rendering the shadow map (with multiple cascades) is expensive: 4~5ms out of a ~30ms total rendering time
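Cascade split distances are commonly computed with the "practical split scheme", blending logarithmic and uniform splits. A Python sketch (the blend factor lam = 0.75 and the near/far values are illustrative tuning choices):

```python
import math

def cascade_splits(near, far, count, lam=0.75):
    """Blend (by lam) between logarithmic splits, which match
    perspective sampling density, and uniform splits, which are
    more stable. Returns count+1 boundaries from near to far."""
    splits = []
    for i in range(1, count):
        p = i / count
        log_split = near * (far / near) ** p       # logarithmic scheme
        uni_split = near + (far - near) * p        # uniform scheme
        splits.append(lam * log_split + (1 - lam) * uni_split)
    return [near] + splits + [far]

print(cascade_splits(0.1, 100.0, 4))
```

Higher lam concentrates resolution near the camera, which is where the shadow-map sampling-rate mismatch is worst.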
There are problems that can be solved with soft shadows:
shadow aliasing
shadow is too hard
shadow calculation is too slow
Instead of taking the indicator variable of whether a pixel is in shadow literally, we average it so that the result is continuous within [0, 1].
PCF (Percentage Closer Filtering) is a technique that averages the shadow-test results over a neighborhood of the shadow map.
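A minimal sketch of the PCF idea in Python, over a plain 2D depth array (the kernel radius and depth bias values are illustrative choices):

```python
def pcf_shadow(shadow_map, x, y, receiver_depth, radius=1, bias=0.005):
    """shadow_map: 2D list of depths as seen from the light.
    Instead of one hard 0/1 depth test, average the test over a
    (2*radius+1)^2 kernel, returning a visibility in [0, 1]."""
    h, w = len(shadow_map), len(shadow_map[0])
    lit, total = 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sx = min(max(x + dx, 0), w - 1)   # clamp to map edges
            sy = min(max(y + dy, 0), h - 1)
            if receiver_depth - bias <= shadow_map[sy][sx]:
                lit += 1                      # this texel does not occlude
            total += 1
    return lit / total
```

Pixels fully inside or outside the shadow still return 0 or 1; only the penumbra along the shadow boundary gets fractional values.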
// TODO For the same object, we want shadow far from the occluder to look blurrier than shadow close to it. PCSS solves this by directly computing the shadow map averaging (filter) size.
// TODO Averaging is computationally expensive; VSSM can be used to reduce this computation.
Summary of AAA Rendering
For Irradiance: Lightmap + Light Probe
For Texture: PBR(SG, MR) + IBL
For Shadow: Cascade Shadow + VSSM
For Environment Light: image-based lighting
The material above was widely used in AAA games from around 2010 to 2015. However, as graphics cards now support ray tracing, we will likely see lightmaps and light probes deprecated, and the global illumination problem largely solved. For example, techniques like A Practical Guide to Global Illumination using Photon Maps or Global Illumination using Photon Maps can be used.
Advancements
More flexible compute models
High performance parallel architecture
Fully open graphics APIs
We will talk about Lumen GI, which consists of 4 different algorithms.
Virtual Shadow Maps: we hash calculated shadow map pages into a virtual shadow map. If a lookup misses the cache, we re-calculate that part of the shadow map and cache it; otherwise we can simply reuse the stored result.
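A toy Python sketch of the caching idea (the page granularity and the render_page callback are hypothetical stand-ins, not Unreal's actual API):

```python
class VirtualShadowMapCache:
    """Pages of the shadow map are keyed by (light_id, page_x, page_y).
    On a miss we 'render' the page and store it; on a hit we reuse it."""
    def __init__(self, render_page):
        self.pages = {}                  # hash table of cached pages
        self.render_page = render_page   # stand-in for real rasterization
        self.misses = 0

    def get(self, light_id, px, py):
        key = (light_id, px, py)
        if key not in self.pages:        # cache miss: render once, store
            self.misses += 1
            self.pages[key] = self.render_page(key)
        return self.pages[key]           # cache hit: free reuse

    def invalidate(self, light_id):
        # When the light or the scene under it moves, drop its pages
        # so they get re-rendered on the next lookup.
        self.pages = {k: v for k, v in self.pages.items()
                      if k[0] != light_id}
```

The win is that a static scene under a static light pays the rendering cost once, and later frames only pay for pages invalidated by movement.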
Shader Management:
Shader Variants: for every platform and every branch divergence in the shader code, we generate one shader variant to reduce branching during GPU execution
Uber Shader: the main shader that artists write, containing many branches
Cloud Rendering: many calculations aren't view-dependent, so cloud computing can be very cheap, as the cost is amortized over the number of users in the same scene.