The Rendering Pipeline | WebGPU | Video

We'll take an in-depth look at how to define our rendering pipeline in our Fun Triangle Project.

Keywords: WebGPU, rendering pipeline, real-time rendering, graphics, tutorial

By Carmen Cincotti


We continue this series by looking at how to configure the WebGPU rendering pipeline to render our WebGPU triangle:

WebGPU Triangle

Can't Wait For The Series To End?

If you would like to move ahead without waiting for the next video in the series, I recommend scrolling down to the code below, or checking out my Rendering a Triangle in WebGPU article.

The WebGPU Triangle Video Series Code

During this series, we will study this code, which you can find in the article where we began the series.

Before you can run this code, you'll need to download a browser that supports WebGPU.

The Rendering Pipeline of WebGPU

A rendering pipeline is represented as a complete function performed by a combination of GPU hardware, the underlying drivers, and the user agent.

The purpose of a rendering pipeline is to process input data, such as vertices, and to produce output such as colors on our screen.

The Rendering Pipeline

The WebGPU docs inform us that a WebGPU rendering pipeline consists of the following steps, in this particular order (see the annotated sketch after the list):

  1. Vertex fetch, controlled by GPUVertexState.buffers
  2. Vertex shader, controlled by GPUVertexState
  3. Primitive assembly, controlled by GPUPrimitiveState
  4. Rasterization, controlled by GPUPrimitiveState, GPUDepthStencilState, and GPUMultisampleState
  5. Fragment shader, controlled by GPUFragmentState
  6. Stencil test and operation, controlled by GPUDepthStencilState
  7. Depth test and write, controlled by GPUDepthStencilState
  8. Output merging, controlled by GPUFragmentState.targets
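To see how these stages map onto the JavaScript API, here is a minimal sketch of the pipeline descriptor with each stage-controlling field annotated. It assumes the shaderModule, vertexBuffersDescriptors, and presentationFormat values that we define later in this article; depthStencil and multisample are only noted in comments, since our triangle doesn't need them:

const descriptor = {
  layout: "auto",
  vertex: {
    // GPUVertexState: vertex fetch (1, via buffers) and the vertex shader (2).
    module: shaderModule,
    entryPoint: "vertex_main",
    buffers: vertexBuffersDescriptors,
  },
  primitive: {
    // GPUPrimitiveState: primitive assembly (3) and rasterization (4).
    topology: "triangle-list",
  },
  // depthStencil (GPUDepthStencilState): stencil test (6) and depth test (7).
  // multisample (GPUMultisampleState): also feeds into rasterization (4).
  fragment: {
    // GPUFragmentState: the fragment shader (5); targets drives output merging (8).
    module: shaderModule,
    entryPoint: "fragment_main",
    targets: [{ format: presentationFormat }],
  },
};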

We've already encountered some of these steps over the past couple of articles, without even realizing it!

We've also already defined our vertex and fragment shaders.

The next step will be to bring all of these different bits together into a coherent render pipeline configuration of type GPURenderPipeline. Let's do that now!

Configuring the WebGPU Rendering Pipeline

Before moving on to the code, there's one more important point that I want to mention…

Not every part of the rendering pipeline is configurable.

Programmable Stages

Stages in the pipeline that we can configure are considered programmable.

Some examples of programmable stages are the vertex and fragment shaders, as we're responsible for writing the code for each of these shader types.

Fixed Stages

Stages in the pipeline that are not configurable are considered fixed.

An example of a fixed stage would be the processing that a vertex undergoes before rasterization.
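For instance, once the vertex shader outputs a clip-space position, the perspective divide and viewport transform happen automatically. We never write this code ourselves; the following is just a JavaScript sketch of what that fixed stage roughly does:

// Not code we write: a rough sketch of the fixed-function processing that
// turns a clip-space vertex position into pixel coordinates.
function fixedVertexProcessing(clip, canvasWidth, canvasHeight) {
  // Perspective divide: clip space -> normalized device coordinates (NDC).
  const ndcX = clip.x / clip.w;
  const ndcY = clip.y / clip.w;

  // Viewport transform: NDC range [-1, 1] -> pixel coordinates.
  return {
    x: (ndcX * 0.5 + 0.5) * canvasWidth,
    y: (1 - (ndcY * 0.5 + 0.5)) * canvasHeight, // y points down in screen space
  };
}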

You can visit this link to learn more about the different steps of the pipeline.

The Code

As expected, we need to use the GPUDevice, device, in order to communicate with our GPU.

In order to configure our rendering pipeline, the method we need to call on our device is createRenderPipeline().

const pipeline = device.createRenderPipeline({
  layout: "auto",
  vertex: {
    module: shaderModule,
    entryPoint: "vertex_main",
    buffers: vertexBuffersDescriptors,
  },
  fragment: {
    module: shaderModule,
    entryPoint: "fragment_main",
    targets: [
      {
        format: presentationFormat,
      },
    ],
  },
  primitive: {
    topology: "triangle-list",
  },
});

We will see how to configure our pipeline in the following sections!

layout

The layout field defines the layout of our WebGPU pipeline: how the resources we bind (such as bind groups) map to our shaders.

We are using a value of auto here, which means we want WebGPU to generate a default layout for our pipeline from the shader code.

Be careful with the auto value, because [the WebGPU docs](https://www.w3.org/TR/webgpu/#pipeline-base) mention that a rendering pipeline layout configured with auto is not recommended when a more complex configuration is needed.

For now, to render just a triangle, auto works well enough.
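For reference, if we ever outgrow auto, we would build the layout explicitly with createBindGroupLayout() and createPipelineLayout(). Here's a minimal, hypothetical sketch assuming a single uniform buffer visible to the vertex shader (our triangle doesn't actually use one):

// Hypothetical explicit layout: one bind group containing one uniform buffer.
const bindGroupLayout = device.createBindGroupLayout({
  entries: [
    {
      binding: 0, // matches @group(0) @binding(0) in the shader
      visibility: GPUShaderStage.VERTEX,
      buffer: { type: "uniform" },
    },
  ],
});

const pipelineLayout = device.createPipelineLayout({
  bindGroupLayouts: [bindGroupLayout],
});

// We would then pass it instead of "auto":
// device.createRenderPipeline({ layout: pipelineLayout, ... });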

vertex, GPUVertexState

Let's look at the vertex field, which is of type GPUVertexState:

vertex: {
  module: shaderModule,
  entryPoint: "vertex_main",
  buffers: vertexBuffersDescriptors,
},

module

The module field contains our shader configuration.

We've already defined the shaderModule value in a previous post where we discussed vertex and fragment shaders, but I'll include it so you don't have to follow the link:

const shaderModule = device.createShaderModule({
  code: `
    struct VertexOut {
      @builtin(position) position : vec4<f32>,
      @location(0) color : vec4<f32>,
    };

    @vertex
    fn vertex_main(@location(0) position: vec4<f32>,
                   @location(1) color: vec4<f32>) -> VertexOut {
      var output : VertexOut;
      output.position = position;
      output.color = color;
      return output;
    }

    @fragment
    fn fragment_main(fragData: VertexOut) -> @location(0) vec4<f32> {
      return fragData.color;
    }
  `,
});

entryPoint

The entryPoint field is the name of the entry point function in our vertex shader definition.

Therefore, we put vertex_main as the value, because it matches the function that we already defined in our vertex shader definition.
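To illustrate why the name matters: a single shader module can contain several entry points, and entryPoint selects which one this stage runs. A hypothetical sketch (not part of the series code):

// Hypothetical module with two candidate vertex entry points.
const twoEntryModule = device.createShaderModule({
  code: `
    @vertex
    fn vertex_main(@location(0) pos: vec4<f32>) -> @builtin(position) vec4<f32> {
      return pos;
    }

    @vertex
    fn vertex_flipped(@location(0) pos: vec4<f32>) -> @builtin(position) vec4<f32> {
      return vec4<f32>(pos.x, -pos.y, pos.z, pos.w);
    }
  `,
});

// entryPoint decides which function runs:
// vertex: { module: twoEntryModule, entryPoint: "vertex_flipped", ... }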

buffers

The buffers field expects a list of vertex buffer descriptors (of type GPUVertexBufferLayout) that describe how our vertex data is laid out in memory.

Recall that we have already defined a descriptor, vertexBuffersDescriptors, as follows:

const vertexBuffersDescriptors = [
  {
    attributes: [
      {
        // Position
        shaderLocation: 0,
        offset: 0,
        format: "float32x4",
      },
      {
        // Color
        shaderLocation: 1,
        offset: 16,
        format: "float32x4",
      },
    ],
    arrayStride: 32,
    stepMode: "vertex", // https://www.w3.org/TR/webgpu/#enumdef-gpuvertexstepmode
  },
];
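Those offsets and the 32-byte arrayStride come from interleaving two float32x4 attributes per vertex: 16 bytes of position followed by 16 bytes of color. Here's a sketch of matching vertex data (the concrete values are illustrative; the series uses its own triangle data):

// Each vertex = 4 floats of position (16 bytes, offset 0)
//             + 4 floats of color (16 bytes, offset 16) = 32-byte stride.
const vertices = new Float32Array([
  // x,    y,   z, w,    r, g, b, a
   0.0,  0.6,   0, 1,    1, 0, 0, 1,
  -0.5, -0.6,   0, 1,    0, 1, 0, 1,
   0.5, -0.6,   0, 1,    0, 0, 1, 1,
]);

const vertexBuffer = device.createBuffer({
  size: vertices.byteLength,
  usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(vertexBuffer, 0, vertices);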

fragment, GPUFragmentState

Next, let's look at the fragment field, which is of type GPUFragmentState:

fragment: {
  module: shaderModule,
  entryPoint: "fragment_main",
  targets: [
    {
      format: presentationFormat,
    },
  ],
},

module

We already talked about this field when configuring the vertex field. Again, we'll just use the shaderModule variable.

entryPoint

Just as with the vertex field, we'll use the value fragment_main here, because it matches the name that we already defined in our definition of the fragment shader.

targets

The targets field holds a list of color target states (GPUColorTargetState objects).

In short, a target is just an image that we want to render onto.

In this project, we only have one target, our canvas element.

We configure this by appending an object to the targets list.

Then, we set the object's format field to the value presentationFormat, which is the preferred texture format of our canvas element.
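As a refresher, presentationFormat was defined earlier in the series while configuring the canvas. A minimal sketch using the current API (the series code may obtain the format slightly differently):

// Ask the browser for its preferred canvas texture format,
// then configure the canvas context with it.
const canvas = document.querySelector("canvas");
const context = canvas.getContext("webgpu");
const presentationFormat = navigator.gpu.getPreferredCanvasFormat();

context.configure({
  device,
  format: presentationFormat,
});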

primitive, GPUPrimitiveState

The primitive field, which is of type GPUPrimitiveState, defines the type of primitive we want to render.

For the triangle, we can just use the value triangle-list, because we want to render triangles… or in our case, a single triangle.
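With every field in place, the configured pipeline is eventually used inside a render pass. Here's a sketch of that usage, assuming a renderPassDescriptor and vertexBuffer set up elsewhere (the full version is in the Gist linked below):

// Record and submit a render pass that uses our pipeline.
const commandEncoder = device.createCommandEncoder();
const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);

passEncoder.setPipeline(pipeline);
passEncoder.setVertexBuffer(0, vertexBuffer);
passEncoder.draw(3); // three vertices -> one triangle
passEncoder.end();

device.queue.submit([commandEncoder.finish()]);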

The Code for Part 6

You can find the code for this part in this GitHub Gist.

Next Time

Weā€™ll move forward with this project and aim to cover some more topics as we render a WebGPU Triangle together.
