Video
We continue this series by looking at how to configure the WebGPU rendering pipeline to render our WebGPU triangle:
Can't Wait For The Series To End?
If you would like to move ahead without waiting for the next video in the series, I recommend scrolling down to the code below, or checking out my Rendering a Triangle in WebGPU article.
The WebGPU Triangle Video Series Code
During this series, we will study this code, which you can find in the article where we began the series.
Before you can run this code, you need to download a browser that is able to run WebGPU code.
The Rendering Pipeline of WebGPU
A rendering pipeline represents a complete function performed by a combination of GPU hardware, the underlying drivers, and the user agent.
The purpose of a rendering pipeline is to process input data, such as vertices, and to produce output such as colors on our screen.
The WebGPU docs inform us that a WebGPU rendering pipeline consists of the following steps, in this particular order:
- Vertex fetch, controlled by `GPUVertexState.buffers`
- Vertex shader, controlled by `GPUVertexState`
- Primitive assembly, controlled by `GPUPrimitiveState`
- Rasterization, controlled by `GPUPrimitiveState`, `GPUDepthStencilState`, and `GPUMultisampleState`
- Fragment shader, controlled by `GPUFragmentState`
- Stencil test and operation, controlled by `GPUDepthStencilState`
- Depth test and write, controlled by `GPUDepthStencilState`
- Output merging, controlled by `GPUFragmentState.targets`
We've already encountered some of these steps over the past couple of articles, without even realizing it!
We've also already defined our vertex and fragment shaders.
The next step is to bring all of these different bits together into a coherent render pipeline configuration of type `GPURenderPipeline`. Let's do that now!
Configuring the WebGPU Rendering Pipeline
Before moving on to the code, there's one more important point that I want to mention…
Not every part of the rendering pipeline is configurable.
Programmable Stages
Stages in the pipeline that we can configure are considered programmable.
Some examples of programmable stages would be the contents of the vertex and fragment shader, as weāre responsible for writing the code for each of these shader types.
Fixed Stages
Stages in the pipeline that are not configurable are considered fixed.
An example of fixed stages would be the processing that a vertex undergoes before rasterization.
You can visit this link to learn more about the different steps of the pipeline.
The code
As expected, we need to use the `GPUDevice`, `device`, in order to communicate with our GPU.
To configure our rendering pipeline, we call the `createRenderPipeline()` method on our `device`:
```javascript
const pipeline = device.createRenderPipeline({
  layout: "auto",
  vertex: {
    module: shaderModule,
    entryPoint: "vertex_main",
    buffers: vertexBuffersDescriptors,
  },
  fragment: {
    module: shaderModule,
    entryPoint: "fragment_main",
    targets: [
      {
        format: presentationFormat,
      },
    ],
  },
  primitive: {
    topology: "triangle-list",
  },
});
```
We'll walk through each of these fields in the following sections!
layout
The `layout` field defines the pipeline layout, which describes the resources (such as bind groups) that our pipeline can access.
We are using a value of `"auto"` here, which means the layout is inferred automatically from our shader code.
Be careful with the `"auto"` value, because [the WebGPU docs](https://www.w3.org/TR/webgpu/#pipeline-base) mention that a rendering pipeline layout configured with `"auto"` is not recommended when a more complex configuration is needed.
For now, to render just a triangle, `"auto"` works well enough.
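For illustration, an explicit layout might look like the sketch below. Note that this bind group layout is hypothetical; our triangle shaders don't actually bind any resources, so this is only to show what replacing `"auto"` could look like:

```javascript
// Hypothetical sketch: an explicit pipeline layout instead of "auto".
// Assumes a shader that declares @group(0) @binding(0) as a uniform buffer.
const bindGroupLayout = device.createBindGroupLayout({
  entries: [
    {
      binding: 0,                      // matches @binding(0) in the WGSL
      visibility: GPUShaderStage.VERTEX,
      buffer: { type: "uniform" },     // e.g. a uniform buffer of matrices
    },
  ],
});

const pipelineLayout = device.createPipelineLayout({
  bindGroupLayouts: [bindGroupLayout], // index 0 corresponds to @group(0)
});

// We would then pass this object instead of "auto":
// const pipeline = device.createRenderPipeline({ layout: pipelineLayout, ... });
```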
vertex, GPUVertexState
Let's look at the `vertex` field, which is of type `GPUVertexState`:

```javascript
vertex: {
  module: shaderModule,
  entryPoint: "vertex_main",
  buffers: vertexBuffersDescriptors,
},
```
module
The `module` field contains our shader configuration.
We've already defined the `shaderModule` value in a previous post where we discussed vertex and fragment shaders, but I'll include it here so you don't have to follow the link:
```javascript
const shaderModule = device.createShaderModule({
  code: `
    struct VertexOut {
      @builtin(position) position : vec4<f32>,
      @location(0) color : vec4<f32>,
    };

    @vertex
    fn vertex_main(@location(0) position: vec4<f32>,
                   @location(1) color: vec4<f32>) -> VertexOut
    {
      var output : VertexOut;
      output.position = position;
      output.color = color;
      return output;
    }

    @fragment
    fn fragment_main(fragData: VertexOut) -> @location(0) vec4<f32>
    {
      return fragData.color;
    }
  `,
});
```
entryPoint
The `entryPoint` field is the name of the function that serves as the entry point of our vertex shader.
Therefore, we put `vertex_main` as the value, because it matches the function that we already defined in our vertex shader definition.
buffers
The `buffers` field expects a list of vertex buffer descriptors.
Recall that we have already defined a descriptor, `vertexBuffersDescriptors`, as follows:

```javascript
const vertexBuffersDescriptors = [
  {
    attributes: [
      {
        // Position
        shaderLocation: 0,
        offset: 0,
        format: "float32x4",
      },
      {
        // Color
        shaderLocation: 1,
        offset: 16,
        format: "float32x4",
      },
    ],
    arrayStride: 32,
    stepMode: "vertex", // https://www.w3.org/TR/webgpu/#enumdef-gpuvertexstepmode
  },
];
```
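To see where the numbers in this descriptor come from, here is a sketch of interleaved vertex data it could describe. The specific positions and colors below are made up for illustration:

```javascript
// Interleaved vertex data matching the descriptor above:
// each vertex is 8 floats (32 bytes): 4 for position, then 4 for color.
const vertices = new Float32Array([
  // x,    y,   z,   w,     r,   g,   b,   a
   0.0,  0.6, 0.0, 1.0,   1.0, 0.0, 0.0, 1.0, // top vertex, red
  -0.5, -0.6, 0.0, 1.0,   0.0, 1.0, 0.0, 1.0, // bottom-left vertex, green
   0.5, -0.6, 0.0, 1.0,   0.0, 0.0, 1.0, 1.0, // bottom-right vertex, blue
]);

// The descriptor's values fall directly out of this layout:
const floatsPerVertex = 8;
const arrayStride = floatsPerVertex * Float32Array.BYTES_PER_ELEMENT; // 32 bytes
const colorOffset = 4 * Float32Array.BYTES_PER_ELEMENT;               // 16 bytes
```

If you change the layout of a vertex (say, by adding an attribute), the `arrayStride` and each attribute's `offset` must be updated to match, or the shader will read garbage.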
fragment, GPUFragmentState
Next, let's look at the `fragment` field, which is of type `GPUFragmentState`:

```javascript
fragment: {
  module: shaderModule,
  entryPoint: "fragment_main",
  targets: [
    {
      format: presentationFormat,
    },
  ],
},
```
module
We already talked about this field for the `vertex` state. Again, we'll just use the `shaderModule` variable.
entryPoint
As with the `vertex` field, we'll use the value `fragment_main` here, because it matches the name that we have already defined in our fragment shader definition.
targets
The `targets` field holds a list of `GPUColorTargetState` objects.
In short, a target is just an image that we want to render onto.
In this project, we only have one target: our canvas element.
We configure this by appending an object to the `targets` list.
Then, we set the object's `format` field to the value `presentationFormat`, which is the preferred texture format of our `canvas` element.
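For reference, `presentationFormat` typically comes from asking the browser for the canvas's preferred texture format. A minimal sketch of that setup, assuming `canvas` and `device` already exist, looks like this:

```javascript
// Sketch: obtaining presentationFormat and configuring the canvas context.
// Assumes `canvas` is an HTMLCanvasElement and `device` is a GPUDevice.
const context = canvas.getContext("webgpu");
const presentationFormat = navigator.gpu.getPreferredCanvasFormat(); // e.g. "bgra8unorm"

context.configure({
  device,
  format: presentationFormat, // must match the pipeline's target format
});
```

Using the preferred format lets the browser avoid a format conversion when presenting the canvas to the screen.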
primitive, GPUPrimitiveState
The `primitive` field, which is of type `GPUPrimitiveState`, defines the type of primitive that we want to render.
For the triangle, we can just use the topology value `triangle-list`, because we want to render triangles… or in our case, a single triangle.
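The topology determines how the GPU groups vertices into primitives. With `triangle-list`, every three vertices form one independent triangle, so a draw of three vertices produces exactly one triangle. A hypothetical helper makes the arithmetic explicit (only two topologies are sketched here):

```javascript
// Hypothetical helper: how many primitives a draw call produces
// for a given vertex count, depending on the topology.
function primitiveCount(topology, vertexCount) {
  switch (topology) {
    case "triangle-list":
      // Every 3 vertices form one independent triangle.
      return Math.floor(vertexCount / 3);
    case "triangle-strip":
      // After the first triangle, each additional vertex adds a triangle.
      return Math.max(vertexCount - 2, 0);
    default:
      throw new Error(`unhandled topology: ${topology}`);
  }
}

primitiveCount("triangle-list", 3);  // our single triangle
primitiveCount("triangle-strip", 5); // a strip of 5 vertices yields 3 triangles
```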
The Code for Part 6
You can find the code for this part in this GitHub Gist.
Next Time
We'll move forward with this project and aim to cover some more topics as we render a WebGPU triangle together.