RenderGround


Overview

RenderGround is a playground for implementing various rendering techniques to improve the renderer of my personal engine.

Info

  • Role: Programmer
  • Engine: Personal engine
  • Language/Tool: C++, OpenGL, GLSL
  • Dev Time: 2 months

Features

  • Physically based rendering with direct lighting and image-based lighting, based on the Cook-Torrance BRDF
    • GGX normal distribution and geometry functions, plus the Fresnel-Schlick approximation, to build the BRDF
    • Monte Carlo sampling with a low-discrepancy sequence and importance sampling for the convolutions
  • Post-processing effects (kernels, bloom, etc)
  • Instanced rendering
  • Directional and point (omnidirectional) shadows
  • HDR rendering 
  • Gamma correction
  • SSAO under deferred shading
  • MSAA in the forward rendering path

PBR

Concepts

The Cook-Torrance based reflectance equation fully expanded is:

Lo(p, ωo) = ∫Ω ( kd * c / π + DFG / (4 (ωo · n)(ωi · n)) ) Li(p, ωi) (n · ωi) dωi

Observing the equation we come to two essential components of the system:

  • Radiance, or L(p,ω), defining the contribution of incoming light from some particular direction to the output
  • The bidirectional reflectance distribution function (BRDF), or DFG / (4 (ωo · n)(ωi · n)), defining how the material of the micro-surface affects the reflection of the incoming light

From now on, radiance is denoted L and the BRDF is denoted f.

There are two situations where this equation is applied: direct lighting and image-based lighting. I will delve into both below.

Direct Light Radiance

Direct light is easier because we know the direction of the light. This makes the L part simple:

vec3 IndividualRadiance(vec3 lightPos, vec3 subjectPos, vec3 lightColor, vec3 viewDir)
{
    vec3 lightDisplacement = lightPos - subjectPos;
    vec3 lightDir = normalize(lightDisplacement);
    vec3 halfway = normalize(viewDir + lightDir);    // used later by the BRDF terms
  
    // radiance of this light arriving at the fragment, scaled by distance attenuation
    float attenuation = ComputeAttenuation(length(lightDisplacement));
    return (lightColor * attenuation);
}

For f, we need to understand what exactly the D, G and F terms are.

Normal Distribution Function

  • D, the normal distribution function (NDF), measures how many of the microfacets have a normal that exactly matches the halfway vector of l and v
  • This relates to roughness, because a rougher surface means fewer such aligned normals
  • I used the Trowbridge-Reitz GGX for D (sketched below)

NDF
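
As a concrete example, a minimal GLSL sketch of the Trowbridge-Reitz GGX distribution (parameter names are my own):

float DistributionGGX(vec3 n, vec3 h, float roughness)
{
    // Disney/Epic remapping: square the perceptual roughness before use
    float a  = roughness * roughness;
    float a2 = a * a;

    float n_dot_h = max(dot(n, h), 0.0);
    float denom   = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0;

    return a2 / (PI * denom * denom);
}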

Geometry Function

  • G, or the geometry function, accounts for light occluded by the geometric features of the microfacets as well as by the incident light angle
  • I used the Schlick-GGX model for G (sketched below)
  • To account for both obstruction and shadowing, Smith’s method is used to combine the two terms
GEO
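
A minimal GLSL sketch of Schlick-GGX combined with Smith’s method (using the common remapping of roughness into k for direct lighting):

float GeometrySchlickGGX(float n_dot_v, float roughness)
{
    // remapping of roughness into k for direct lighting
    float r = roughness + 1.0;
    float k = (r * r) / 8.0;

    return n_dot_v / (n_dot_v * (1.0 - k) + k);
}

float GeometrySmith(vec3 n, vec3 v, vec3 l, float roughness)
{
    // obstruction seen from the view direction times shadowing seen from the light direction
    float view_term  = GeometrySchlickGGX(max(dot(n, v), 0.0), roughness);
    float light_term = GeometrySchlickGGX(max(dot(n, l), 0.0), roughness);
    return view_term * light_term;
}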

Fresnel Equation

  • The Fresnel equation improves the realism of the model by describing the ratio between reflected and refracted light
  • I used the Fresnel-Schlick approximation to model it (sketched below)
Fresnel
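
A minimal GLSL sketch of the approximation, where f0 is the base reflectivity of the surface at normal incidence:

vec3 FresnelSchlick(float cos_theta, vec3 f0)
{
    // cos_theta is the cosine between the halfway vector and the view direction
    return f0 + (1.0 - f0) * pow(1.0 - cos_theta, 5.0);
}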

Direct Light BRDF

Now that each term of the BRDF is known, we can compute the overall PBR lighting for a direct light:
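
A sketch of how these terms could combine per light, building on IndividualRadiance and the helper functions above; lightPos, lightColor, normal, viewDir, albedo, metallic, roughness and f0 are assumed inputs:

vec3 Lo = vec3(0.0);
for (int i = 0; i < NUM_LIGHTS; ++i)
{
    vec3 lightDir = normalize(lightPos[i] - subjectPos);
    vec3 halfway  = normalize(viewDir + lightDir);
    vec3 radiance = IndividualRadiance(lightPos[i], subjectPos, lightColor[i], viewDir);

    // Cook-Torrance specular: DFG / (4 (wo . n)(wi . n))
    float D = DistributionGGX(normal, halfway, roughness);
    float G = GeometrySmith(normal, viewDir, lightDir, roughness);
    vec3  F = FresnelSchlick(max(dot(halfway, viewDir), 0.0), f0);

    float n_dot_l = max(dot(normal, lightDir), 0.0);
    float n_dot_v = max(dot(normal, viewDir), 0.0);
    vec3 specular = (D * G) * F / max(4.0 * n_dot_v * n_dot_l, 0.001);

    // energy conservation: light that is not reflected is diffused; metals have no diffuse
    vec3 kd = (vec3(1.0) - F) * (1.0 - metallic);

    Lo += (kd * albedo / PI + specular) * radiance * n_dot_l;
}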

To illustrate, I arranged four lights in the scene, colored white, red, green and blue. Note how the lights are reflected on the models.

Image-Based Lighting

Image-based lighting is an indirect lighting technique: it treats the environment captured in an image as the light source, which lets us reflect that environment on materials.

Now incoming light can come from every direction over the hemisphere. The problem is that evaluating the equation for a single direction is already costly, so evaluating it in the fragment shader for every possible direction is not feasible.

The countermeasure is to pre-process instead of computing everything frame by frame. This falls into three steps:

  • Convolution over the cube map for diffuse irradiance
  • Convolution over the cube map at several mipmap levels for the pre-filtered environment map, used for the specular part
  • Pre-computation of the BRDF into a look-up texture, also used for the specular part

So it is mainly two convolutions and a pre-computed texture. Let’s take a look.

Convolution

First, let’s look at the convolution for the diffuse map. The concept is simple: we take a finite number of samples to approximate the diffuse irradiance. Since the number is finite, we can use a summation instead of the integral:

In code, this is:

vec3 irradiance = vec3(0.0);
int num_sample = 0;

// march over the hemisphere in spherical coordinates (hori_step / vert_step are the sampling deltas)
for (float hori = 0.0; hori < 2.0 * PI; hori += hori_step)
{
    for (float vert = 0.0; vert < 0.5 * PI; vert += vert_step)
    {
        // get sample location in world space
        vec3 tangent_pos = vec3(sin(vert) * cos(hori), sin(vert) * sin(hori), cos(vert));
        vec3 world_pos = TangentToWorld(tangent_pos);

        // get sample irradiance from the cube map, weighted by cos(vert) and by
        // sin(vert) to compensate for the smaller solid angles near the pole
        irradiance += texture(env_map, world_pos).rgb * sin(vert) * cos(vert);

        num_sample++;
    }
}

// average the samples and scale by PI (from the Lambertian term)
float fl_num = float(num_sample);
irradiance = irradiance * (PI / fl_num);

Drawing the irradiance map we get:

You can see the “blurred” map in the background.


Separate from the diffuse term, we have the same problem for the specular part. With the split sum approximation from Epic Games, we split the equation into two integrals:

We first solve the first integral above. This time we take the samples differently, because we are only interested in reflections within the specular lobe. To do this I used a biased Monte Carlo method with importance sampling:
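
A sketch of the ImportanceSample function used below: it maps a 2D seed onto a halfway vector biased toward the GGX specular lobe of the given roughness:

vec3 ImportanceSample(vec2 seed, vec3 normal, float roughness)
{
    float a = roughness * roughness;

    // map the 2D seed to spherical coordinates biased toward the GGX lobe
    float phi       = 2.0 * PI * seed.x;
    float cos_theta = sqrt((1.0 - seed.y) / (1.0 + (a * a - 1.0) * seed.y));
    float sin_theta = sqrt(1.0 - cos_theta * cos_theta);

    // halfway vector in tangent space
    vec3 halfway = vec3(sin_theta * cos(phi), sin_theta * sin(phi), cos_theta);

    // build an orthonormal basis around the normal and transform to world space
    vec3 up      = abs(normal.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 tangent = normalize(cross(up, normal));
    vec3 bitan   = cross(normal, tangent);

    return normalize(tangent * halfway.x + bitan * halfway.y + normal * halfway.z);
}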

With this, generating the pre-filtered map is straightforward. It follows these steps:

  • Generate a sample seed
  • Generate a sample vector in world space with that seed
  • Generate pre-filtered color of the environment map with the sample vector

In code, this is:

float weight = 0.0;
vec3 filteredRGB = vec3(0.0);

for (int i = 0; i < NUM_SAMPLE; ++i)
{
    // importance-sample a halfway vector inside the specular lobe
    vec2 seed = Hammersley_SeedGen(i, NUM_SAMPLE);
    vec3 halfway = ImportanceSample(seed, normal, roughness);

    // reflect the view vector about the halfway vector to get the sample direction
    vec3 filter_v = normalize(dot(v, halfway) * 2.0 * halfway - v);

    float n_dot_f = max(dot(normal, filter_v), 0.0);
    if(n_dot_f > 0.0)
    {
        filteredRGB += (n_dot_f * texture(env_map, filter_v).rgb);
        weight += n_dot_f;
    }
}

filteredRGB = filteredRGB / weight;

The Hammersley_SeedGen function generates seeds with the Hammersley sequence, which is a black box in this case.
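
For reference, one common way to implement it pairs i/N with the Van der Corput radical inverse; this is only one possible implementation, not necessarily the one used here:

float RadicalInverse_VdC(uint bits)
{
    // reverse the bits of the index, then scale into [0, 1)
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10;    // 1 / 2^32
}

vec2 Hammersley_SeedGen(int i, int n)
{
    return vec2(float(i) / float(n), RadicalInverse_VdC(uint(i)));
}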

Finally, we generate these pre-filtered maps at different mipmap levels, feeding in a different roughness per level. This step depends on the graphics library one is using, but generally includes (a sketch follows the list):

  • Use the filter shader
  • Bind texture to the environment map
  • Bind a frame buffer used to capture the filtered map
  • Render into that FBO with different mipmap levels
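
A rough OpenGL sketch of that loop, assuming helpers such as RenderCube, a mat4 overload of SetUniform and per-face captureViews matrices (the base resolution of 128 and the mip count of 5 are placeholders):

glUseProgram(prefilter_shader);                     // use the filter shader
glBindTexture(GL_TEXTURE_CUBE_MAP, env_map);        // environment map as input
glBindFramebuffer(GL_FRAMEBUFFER, capture_fbo);     // FBO that captures the filtered map

const uint maxMipLevels = 5;
for (uint mip = 0; mip < maxMipLevels; ++mip)
{
    // each mip level is half the size of the previous one
    uint mipSize = 128 >> mip;
    glViewport(0, 0, mipSize, mipSize);

    // rougher surfaces read from higher (blurrier) mip levels
    float roughness = (float)mip / (float)(maxMipLevels - 1);
    SetUniform("roughness", roughness);

    for (uint face = 0; face < 6; ++face)
    {
        SetUniform("view", captureViews[face]);     // assuming a mat4 overload
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, prefilter_map, mip);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        RenderCube();
    }
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);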

Now if I render the pre-filtered map, I get a similar blurred background to the one above, but with different “levels” of blur.

And this wraps up Convolution.

IBL Pre-computed BRDF

To pre-compute the BRDF we store the data in a look-up texture. Later, in the shaders, we can simply read from this texture, saving a lot of per-frame work.

The equation goes:

Again, the exact code for drawing to the texture depends on the graphics library used, but it generally follows these steps (sketched below):

  • Generate and bind texture placeholder, allocating memory for the texture
  • Set texture settings (wrapping, filters)
  • Bind an FBO in charge of the drawing
  • Let the FBO know which texture to draw to via the texture handle
  • Use the pre-compute shader
  • Draw to the texture
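
A minimal OpenGL sketch of those steps; the 512x512 resolution, the capture_fbo handle and the brdf_precompute_shader name are placeholders:

// generate and allocate the 2D look-up texture (two channels: scale and bias)
uint brdf_lut;
glGenTextures(1, &brdf_lut);
glBindTexture(GL_TEXTURE_2D, brdf_lut);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG16F, 512, 512, 0, GL_RG, GL_FLOAT, nullptr);

// clamp to the edge and filter linearly to avoid artifacts at the borders
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// attach the texture to the capture FBO and draw a full-screen quad with the pre-compute shader
glBindFramebuffer(GL_FRAMEBUFFER, capture_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, brdf_lut, 0);
glViewport(0, 0, 512, 512);
glUseProgram(brdf_precompute_shader);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
RenderQuad();
glBindFramebuffer(GL_FRAMEBUFFER, 0);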

This gives:

Look up texture

The LUT contains a scale and a bias to apply to the base reflectivity of the material (F0 in the Fresnel equation).
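
In the lighting shader, the irradiance map, the pre-filtered map and the LUT then combine roughly as below (sampler names, MAX_LOD and ao are assumptions; a common refinement also injects roughness into the Fresnel term, omitted here):

vec3 F  = FresnelSchlick(max(dot(normal, viewDir), 0.0), f0);
vec3 kd = (vec3(1.0) - F) * (1.0 - metallic);

// diffuse part: look up the convolved irradiance map along the normal
vec3 irradiance = texture(irradiance_map, normal).rgb;
vec3 diffuse    = irradiance * albedo;

// specular part: pre-filtered color plus scale/bias from the BRDF LUT
const float MAX_LOD = 4.0;
vec3 reflected   = reflect(-viewDir, normal);
vec3 prefiltered = textureLod(prefilter_map, reflected, roughness * MAX_LOD).rgb;
vec2 brdf        = texture(brdf_lut, vec2(max(dot(normal, viewDir), 0.0), roughness)).rg;
vec3 specular    = prefiltered * (F * brdf.x + brdf.y);

vec3 ambient = (kd * diffuse + specular) * ao;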

All combined

With all of the above, we have a PBR system with both direct and indirect lighting, at reasonable performance thanks to pre-processing steps such as the convolutions and the pre-computed BRDF.


HDR Bloom

Concept

Using HDR rendering for Bloom makes the effect more controllable. Bloom requires a color buffer that records fragments considered “bright”, and HDR provides a much wider color range to work with than LDR does.

HDR FBO and MRT

Creating the FBO and multiple render targets (MRT) for it satisfies our need for two color attachments. In OpenGL this is:

// fbo
uint hdr;
glGenFramebuffers(1, &hdr);
glBindFramebuffer(GL_FRAMEBUFFER, hdr);

// two color attachments: one for the scene color, one for the "bright" color
uint color_attach[2];
glGenTextures(2, color_attach);
for (uint i = 0; i < 2; ++i)
    BindColorBuffer(i, color_attach[i]);

BindColorBuffer just binds the color buffer to the correct attachment index.
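
For reference, a sketch of what such a helper might do, together with the glDrawBuffers call that MRT requires (width and height are assumed to be the window size):

void BindColorBuffer(uint index, uint texture)
{
    // allocate a floating-point color buffer so values above 1.0 are not clamped
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // attach it to the currently bound FBO at the given attachment index
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + index,
                           GL_TEXTURE_2D, texture, 0);
}

// tell OpenGL to render into both attachments (scene color and "bright" color)
uint attachments[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, attachments);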

Two-pass Gaussian Blur

To blur the color buffer holding the “bright” colors, we use a Gaussian blur, which can be broken into a horizontal pass and a vertical pass. A full 8 by 8 kernel needs 64 samples per fragment, while the separable two-pass version only needs 16 (8 + 8). This is a nice improvement once the kernel gets large.

In code I interleaved 10 horizontal blur passes with 10 vertical blur passes:

bool horizontal = true;
for (uint i = 0; i < 20; ++i)
{
    SetUniform("horizontal", horizontal);

    // ping-pong: draw into one FBO while sampling the other's color buffer;
    // the very first pass samples the "bright" attachment of the HDR FBO
    glBindFramebuffer(GL_FRAMEBUFFER, doubleFBO[horizontal]);
    glBindTexture(GL_TEXTURE_2D, (i == 0) ? color_attach[1] : doubleCBO[!horizontal]);

    RenderQuad();

    horizontal = !horizontal;
}

With horizontal fed to the shader, I can then blur texel by texel in the correct direction.
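
A sketch of the blur fragment shader with a 5-tap Gaussian kernel (the weights are the commonly used precomputed values; uv and frag_color are my own names):

in vec2 uv;
out vec4 frag_color;

uniform sampler2D image;
uniform bool horizontal;

// precomputed Gaussian weights for the center texel and its four neighbors
const float weight[5] = float[](0.227027, 0.1945946, 0.1216216, 0.054054, 0.016216);

void main()
{
    vec2 texel  = 1.0 / vec2(textureSize(image, 0));
    vec3 result = texture(image, uv).rgb * weight[0];

    for (int i = 1; i < 5; ++i)
    {
        // step along x for the horizontal pass, along y for the vertical pass
        vec2 offset = horizontal ? vec2(texel.x * float(i), 0.0) : vec2(0.0, texel.y * float(i));
        result += texture(image, uv + offset).rgb * weight[i];
        result += texture(image, uv - offset).rgb * weight[i];
    }

    frag_color = vec4(result, 1.0);
}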

Exposure 

I implemented an exposure factor in the shader to show the dynamic range HDR adds to the Bloom effect:

Changing the exposure and toggling Bloom on and off
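
For reference, a minimal sketch of how the exposure and the Bloom toggle fit into the final tone-mapping shader (uniform names are mine):

in vec2 uv;
out vec4 frag_color;

uniform sampler2D hdr_scene;     // first color attachment: the scene color
uniform sampler2D bloom_blur;    // blurred "bright" color after the two-pass blur
uniform bool  bloom_on;
uniform float exposure;

void main()
{
    vec3 hdr_color = texture(hdr_scene, uv).rgb;
    if (bloom_on)
        hdr_color += texture(bloom_blur, uv).rgb;   // additive blending of the Bloom

    // exposure tone mapping from HDR down to [0, 1], then gamma correction
    vec3 mapped = vec3(1.0) - exp(-hdr_color * exposure);
    mapped = pow(mapped, vec3(1.0 / 2.2));

    frag_color = vec4(mapped, 1.0);
}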

Omnidirectional Shadows

Omnidirectional shadows, unlike directional shadows, are shadows cast by lights whose rays can come from any direction. This happens when we use point lights.

The trick lies in the way we generate the light-space transform (part of the shadow-mapping pipeline). In this case we use a perspective projection rather than an orthographic projection.
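
For reference, the standard point-light shadow setup renders depth into a cube map with one 90-degree perspective view per face; a sketch assuming a GLM-style math library and a square shadow map:

float aspect     = (float)SHADOW_WIDTH / (float)SHADOW_HEIGHT;   // 1.0 for a square cube-map face
float near_plane = 1.0f;
float far_plane  = 25.0f;
glm::mat4 shadow_proj = glm::perspective(glm::radians(90.0f), aspect, near_plane, far_plane);

// one view matrix per cube-map face, all looking outward from the point light
std::vector<glm::mat4> shadow_transforms;
shadow_transforms.push_back(shadow_proj * glm::lookAt(light_pos, light_pos + glm::vec3( 1.0f,  0.0f,  0.0f), glm::vec3(0.0f, -1.0f,  0.0f)));
shadow_transforms.push_back(shadow_proj * glm::lookAt(light_pos, light_pos + glm::vec3(-1.0f,  0.0f,  0.0f), glm::vec3(0.0f, -1.0f,  0.0f)));
shadow_transforms.push_back(shadow_proj * glm::lookAt(light_pos, light_pos + glm::vec3( 0.0f,  1.0f,  0.0f), glm::vec3(0.0f,  0.0f,  1.0f)));
shadow_transforms.push_back(shadow_proj * glm::lookAt(light_pos, light_pos + glm::vec3( 0.0f, -1.0f,  0.0f), glm::vec3(0.0f,  0.0f, -1.0f)));
shadow_transforms.push_back(shadow_proj * glm::lookAt(light_pos, light_pos + glm::vec3( 0.0f,  0.0f,  1.0f), glm::vec3(0.0f, -1.0f,  0.0f)));
shadow_transforms.push_back(shadow_proj * glm::lookAt(light_pos, light_pos + glm::vec3( 0.0f,  0.0f, -1.0f), glm::vec3(0.0f, -1.0f,  0.0f)));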

The result is the shadow projected into an oblong shape:

Post Mortem

What went well

  • Learned a lot about various rendering techniques, especially lighting models
  • Settled into a Physics-Rendering two-way working pattern that I personally liked. Any time I was driven into a corner by computational geometry while solving Physics problems, I could switch battlefields and refresh my mind with the nice-looking PBR scene.
  • Had a lot more hands-on shader programming experience

What went wrong

  • Too many techniques, too little time. There are many other variations of functions used in different models (for example, the diffuse component of the PBR model) that I wish I had more time to implement.
  • Took too much time on reading materials and understanding concepts

Action Plan

  • Continue trying out alternative solutions to a given model
  • Write code when getting familiar with concepts and be willing to kill it later