Rendering Pipeline In Unity


Introduction to Rendering Pipeline in Unity


Rendering vs. Drawing

Rendering

Rendering is the process of drawing a scene on the computer screen.


Rendering involves a combination of geometry calculations, textures, surface treatments, the viewer’s perspective, and lighting.





The rendering pipeline is the process of transforming all of this data into the virtual environment on the screen.



Rendering vs. Drawing, Source: Wikipedia







The rendering pipeline consists of:

Application (CPU work happening in the background)


CPU Input to the GPU, where Geometry and Rasterization will take place, Source: Wikipedia


In the Application phase, the CPU issues the instructions that drive Geometry and Rasterization; decisions made here affect both of those later stages.

For example, triggering a post-processing effect to signal a collision is an application-phase decision whose visual result shows up during rasterization.
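A minimal sketch of this kind of application-phase work, assuming a hypothetical hitEffect component (any image-effect Behaviour attached to the camera):

```csharp
using UnityEngine;

// Application-phase (CPU) work: gameplay logic that instructs the later
// pipeline stages. "hitEffect" is a hypothetical post-processing component
// assigned in the Inspector; this is a sketch, not a specific Unity API.
public class CollisionFlash : MonoBehaviour
{
    [SerializeField] private Behaviour hitEffect; // assumption: any image-effect component

    private void OnCollisionEnter(Collision collision)
    {
        // A CPU-side decision made in the application phase; the visual
        // result is produced later, during rasterization/post-processing.
        if (hitEffect != null)
            hitEffect.enabled = true;
    }
}
```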

Geometry

Creating 3D and 2D models from mesh data and simulating the virtual world in its dimensions.

Creation of Models using Mesh Data, Source: Wikipedia


It includes calculations for the camera's position and for the translation, rotation, and scaling of the virtual world.
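As a sketch, these geometry-stage transforms can be reproduced with Unity's Matrix4x4.TRS, which composes translation, rotation, and scale into a single model matrix:

```csharp
using UnityEngine;

// A minimal sketch of the geometry-stage transform: build a model
// (local-to-world) matrix from position, rotation, and scale, then apply it
// to a local-space vertex.
public class ModelMatrixExample : MonoBehaviour
{
    private void Start()
    {
        Matrix4x4 model = Matrix4x4.TRS(
            new Vector3(1f, 0f, 2f),        // translation
            Quaternion.Euler(0f, 45f, 0f),  // rotation
            Vector3.one * 2f);              // uniform scale

        Vector3 localVertex = new Vector3(0.5f, 0.5f, 0.5f);
        Vector3 worldVertex = model.MultiplyPoint3x4(localVertex);
        Debug.Log($"World-space position: {worldVertex}");
    }
}
```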

Rasterization (or Rasterisation)

Since our screens are 2D, rasterization determines how the geometry (3D and 2D) is drawn on our 2D screen.

Rasterization is converting an image from vector graphics format to a raster image(pixeled image), Source: ShichenLiu


Rasterization is where we process the virtual environment several times through different filters and then output the result on the screen.
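To make the idea concrete, here is a toy sketch of triangle rasterization (not Unity's actual rasterizer): every pixel center inside the triangle's bounding box is tested with edge functions, and covered pixels are marked.

```csharp
using UnityEngine;

// Toy rasterizer: converts one 2D triangle into covered pixels.
// Assumes counter-clockwise vertex order.
public static class ToyRasterizer
{
    // Edge function: positive when point c lies to the left of edge a->b.
    private static float Edge(Vector2 a, Vector2 b, Vector2 c)
        => (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);

    public static void Rasterize(Vector2 v0, Vector2 v1, Vector2 v2, bool[,] pixels)
    {
        // Clamp the triangle's bounding box to the pixel grid.
        int xMin = Mathf.Max(0, Mathf.FloorToInt(Mathf.Min(v0.x, Mathf.Min(v1.x, v2.x))));
        int yMin = Mathf.Max(0, Mathf.FloorToInt(Mathf.Min(v0.y, Mathf.Min(v1.y, v2.y))));
        int xMax = Mathf.Min(pixels.GetLength(0) - 1, Mathf.CeilToInt(Mathf.Max(v0.x, Mathf.Max(v1.x, v2.x))));
        int yMax = Mathf.Min(pixels.GetLength(1) - 1, Mathf.CeilToInt(Mathf.Max(v0.y, Mathf.Max(v1.y, v2.y))));

        for (int y = yMin; y <= yMax; y++)
        for (int x = xMin; x <= xMax; x++)
        {
            Vector2 p = new Vector2(x + 0.5f, y + 0.5f); // pixel center
            bool inside = Edge(v0, v1, p) >= 0f &&
                          Edge(v1, v2, p) >= 0f &&
                          Edge(v2, v0, p) >= 0f;
            if (inside) pixels[x, y] = true;
        }
    }
}
```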

Post-processing effects: effects applied to the 2D version of the objects (the screen model), not to the 3D model (the geometry model).

Rendering Processes:

1. Geometry

Mesh data are collected (vertices array, normals array, triangles array, and UV array).


Stanford Bunny Mesh, Source: http://watkins.cs.queensu.ca/
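A minimal sketch of reading these four arrays in Unity, assuming the script sits on a GameObject with a MeshFilter (e.g. a default Cube):

```csharp
using UnityEngine;

// Reads the mesh data listed above from the attached MeshFilter.
[RequireComponent(typeof(MeshFilter))]
public class MeshDataDump : MonoBehaviour
{
    private void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        Vector3[] vertices  = mesh.vertices;  // vertex positions
        Vector3[] normals   = mesh.normals;   // per-vertex normals
        int[]     triangles = mesh.triangles; // indices, 3 per triangle
        Vector2[] uv        = mesh.uv;        // texture coordinates

        Debug.Log($"{vertices.Length} vertices, {normals.Length} normals, " +
                  $"{triangles.Length / 3} triangles, {uv.Length} UVs");
    }
}
```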

2. Illumination (Models are colored and lit)

In the Illumination stage, we add lighting effects to the virtual world.

Using different lighting models, we light our objects.

Using different inputs (textures, normal maps, etc.), we color the objects in the virtual world.

Stanford Bunny Texture, Source: ALICE project-team


Using UV arrays from the mesh data to map a texture onto the Stanford Bunny, Source: Reddit

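As a sketch of one such lighting model, here is the Lambert diffuse term, computed on the CPU purely for illustration (in practice this runs per pixel or per vertex in a shader):

```csharp
using UnityEngine;

// Lambert diffuse: surface brightness is proportional to the cosine of the
// angle between the surface normal and the light direction (N·L).
public static class LambertExample
{
    // lightDir points from the surface toward the light.
    public static Color Shade(Vector3 normal, Vector3 lightDir,
                              Color albedo, Color lightColor)
    {
        // Clamp N·L to zero so surfaces facing away receive no light.
        float ndotl = Mathf.Max(0f, Vector3.Dot(normal.normalized, lightDir.normalized));
        return albedo * lightColor * ndotl;
    }
}
```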


3. Viewer’s Perspective (Camera Input)

Before rendering the environment to the screen, we take the camera input into account, such as the field of view and the projection mode (Orthographic or Perspective).
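A minimal sketch of setting these camera inputs through Unity's Camera API:

```csharp
using UnityEngine;

// Configures the camera inputs mentioned above: projection mode and FOV.
[RequireComponent(typeof(Camera))]
public class CameraSetup : MonoBehaviour
{
    private void Start()
    {
        Camera cam = GetComponent<Camera>();

        cam.orthographic = false; // perspective projection
        cam.fieldOfView  = 60f;   // vertical FOV in degrees (perspective only)

        // For an orthographic camera, size replaces field of view:
        // cam.orthographic = true;
        // cam.orthographicSize = 5f; // half the vertical view volume
    }
}
```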


4. Clipping

(Removing objects outside of the camera's view)


Screen Space Projection using clipping planes, Source: LearnOpenGL
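A minimal sketch of the underlying test, using Unity's GeometryUtility to check whether a renderer's bounds fall inside the camera frustum (the target field is a hypothetical reference assigned in the Inspector):

```csharp
using UnityEngine;

// View-frustum test, the idea behind clipping/culling: objects whose bounds
// lie entirely outside the camera's frustum can be skipped.
public class FrustumCheck : MonoBehaviour
{
    [SerializeField] private Renderer target; // assumption: assigned in the Inspector

    private void Update()
    {
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(Camera.main);
        bool visible = GeometryUtility.TestPlanesAABB(planes, target.bounds);
        if (!visible)
            Debug.Log($"{target.name} is outside the camera view");
    }
}
```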

5. Screen Space Projection

(Projecting the 3D environment onto the 2D screen model)

Screen Space Projection, Source: Scratchapixel

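A minimal sketch of this projection using Unity's built-in view and projection transforms:

```csharp
using UnityEngine;

// Projects this object's 3D world position onto the 2D screen.
public class ScreenProjection : MonoBehaviour
{
    private void Update()
    {
        // x, y are pixel coordinates; z is the depth from the camera.
        Vector3 screenPos = Camera.main.WorldToScreenPoint(transform.position);
        Debug.Log($"Screen position: ({screenPos.x:F0}, {screenPos.y:F0}), depth {screenPos.z:F1}");
    }
}
```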

6. Adding Post-Processing Effects

These are the effects we add to the 2D image just before displaying the final output on the screen.

Depth of field is a post-processing effect applied to the 2D final image, Source: Bart Wronski


Bloom is a post-processing effect applied to the 2D final image, Source: GamersNexus

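A minimal sketch of a post-processing effect in Unity's built-in render pipeline, assuming a hypothetical full-screen tint material: OnRenderImage receives the rendered 2D image and writes the processed result.

```csharp
using UnityEngine;

// Post-processing in the built-in render pipeline: OnRenderImage runs after
// the camera has rendered the scene, so the effect operates on the 2D image,
// not on the 3D geometry. "tintMaterial" is a hypothetical material whose
// shader tints the whole screen.
[RequireComponent(typeof(Camera))]
public class TintEffect : MonoBehaviour
{
    [SerializeField] private Material tintMaterial; // assumption: a full-screen tint shader

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (tintMaterial != null)
            Graphics.Blit(source, destination, tintMaterial); // process the 2D image
        else
            Graphics.Blit(source, destination); // pass through unchanged
    }
}
```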

7. Display

Finally, we render our scene to the screen.

This was a macroscopic look at the rendering pipeline processes in Unity. It provides an inside perspective on the rendering pipeline and the work done by many brilliant engineers to help us create better apps and tools.
