Video
We continue this series by looking at how to configure the vertex shader and the fragment shader of our WebGPU Triangle.
Can't Wait For The Series To End?
If you would like to move ahead without waiting for the next video in the series, I recommend scrolling down to the code below, or checking out my Rendering a Triangle in WebGPU article.
The WebGPU Triangle Video Series Code
During this series, we will study this code, which you can find in the article where we begin the series.
Before you can render with this code, you will need a browser that supports WebGPU.
What is a shader?
A shader is a program that we write to run on the GPU.
In this article, we'll be programming two types of GPU shaders:
- Vertex shader - A program that runs on the GPU once for each vertex. It returns the position of the vertex in 3D space, and it can also pass data along to the fragment shader.
- Fragment shader - A program that runs on the GPU for each fragment (pixel) between our vertices. It must return the color of the fragment (pixel) it is currently processing.
The Fragment and Vertex Shaders
Recall that in a previous article, we defined and passed an array of vertices to the GPU by pushing them to a GPU buffer.
Once the GPU is ready to render an image to our screen, it reads in and processes each vertex by executing a vertex shader.
The Vertex Shader
The vertex shader is a program that executes on the GPU. It is run once to process each vertex in a 3D scene.
The goal of a vertex shader is to return the position in 3D space of the vertex that it's currently processing.
Unlike most things in the rendering pipeline, this step must be configured with our own code (which weâll see soon)!
Code Example
Here is an example of the simplest vertex shader that can be written:
@vertex
fn vertex_main() -> @builtin(position) vec4<f32>
{
  return vec4<f32>(1.0, 0.0, 0.0, 1.0);
}
By returning vec4<f32>(1.0, 0.0, 0.0, 1.0), we set this particular vertex at that position in 3D space. This position is then sent down the rendering pipeline for further processing.
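The example above returns the same hard-coded position for every vertex. As a slightly fuller sketch (not part of our triangle app's code), WGSL also provides input builtins such as @builtin(vertex_index), which lets a vertex shader pick a different hard-coded position on each invocation:

```wgsl
@vertex
fn vertex_main(@builtin(vertex_index) i : u32) -> @builtin(position) vec4<f32>
{
  // Three hard-coded clip-space positions, one per invocation.
  var positions = array<vec2<f32>, 3>(
    vec2<f32>(0.0, 0.5),
    vec2<f32>(-0.5, -0.5),
    vec2<f32>(0.5, -0.5)
  );
  return vec4<f32>(positions[i], 0.0, 1.0);
}
```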
After the vertex shader has processed the vertices, the position data and other data (we'll see this soon) is processed by other "hidden" steps in the rendering pipeline.
Eventually, we'll want to "color in" our image. This is the job of the fragment shader.
The Fragment Shader
The fragment shader is another program that runs on the GPU that returns a color value for each fragment (just think pixel for now) that is going to be rendered in our image.
The goal of a fragment shader is to return a color for the fragment (pixel) that it's currently processing.
Code Example
Here is an example of a fragment shader where each pixel is colored red:
@fragment
fn fragment_main() -> @location(0) vec4<f32>
{
  return vec4<f32>(1.0, 0.0, 0.0, 1.0); // Red (RGBA)
}
How many times does each shader execute on the GPU per frame?
Take for example our triangle that we want to render:
For our triangle, the vertex shader will run three times per frame. I've marked the three vertices in pink, to be clear.
The fragment shader is executed for every pixel in between the vertices of our WebGPU Triangle for each frame.
As you can see, the fragment shader is generally executed many more times than the vertex shader!
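To build some intuition for those counts, here is a rough back-of-the-envelope sketch in JavaScript. This is a simplification of our own, not a real GPU query: it ignores multisampling, overdraw, and partially covered pixels.

```javascript
// Rough estimate of shader invocation counts for one triangle.
// The vertex shader runs once per vertex; the fragment shader runs
// roughly once per covered pixel.
function triangleInvocations(v0, v1, v2) {
  // Shoelace formula for the triangle's area in pixels.
  const area = Math.abs(
    (v1.x - v0.x) * (v2.y - v0.y) - (v2.x - v0.x) * (v1.y - v0.y)
  ) / 2;
  return { vertex: 3, fragment: Math.round(area) };
}

// A triangle spanning a 200x100-pixel region:
const counts = triangleInvocations({x: 0, y: 0}, {x: 200, y: 0}, {x: 0, y: 100});
// counts.vertex === 3, counts.fragment === 10000
```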
The Triangle Shaders
Here is the code we will use to render a triangle:
const shaderModule = device.createShaderModule({
  code: `
    struct VertexOut {
      @builtin(position) position : vec4<f32>,
      @location(0) color : vec4<f32>,
    };

    @vertex
    fn vertex_main(@location(0) position: vec4<f32>,
                   @location(1) color: vec4<f32>) -> VertexOut
    {
      var output : VertexOut;
      output.position = position;
      output.color = color;
      return output;
    }

    @fragment
    fn fragment_main(fragData: VertexOut) -> @location(0) vec4<f32>
    {
      return fragData.color;
    }
  `,
});
As we should be familiar with by now, we need to use the GPUDevice, device, in order to communicate with the GPU.
You can see that we use device.createShaderModule() to register the shaders.
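As a side note, createShaderModule() compiles the WGSL when it is called, and you can inspect the compiler's messages if something goes wrong. The getCompilationInfo() method comes from the WebGPU spec; the small formatting helper below is our own illustrative code and runs anywhere, while the commented-out call itself needs a WebGPU-capable browser.

```javascript
// Format the messages returned by shaderModule.getCompilationInfo().
// Each message has a type ("error", "warning", or "info"), a line
// number and position, and the human-readable text.
function formatCompilationMessages(messages) {
  return messages.map(
    (m) => `${m.type} at line ${m.lineNum}:${m.linePos} - ${m.message}`
  );
}

// In a WebGPU-capable browser you would call it like this:
// const info = await shaderModule.getCompilationInfo();
// formatCompilationMessages(info.messages).forEach((line) => console.log(line));
```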
The WebGPU Vertex Shader Code
Now, let's take a deeper dive into the vertex shader code to see what each line is doing:
struct VertexOut {
  @builtin(position) position : vec4<f32>,
  @location(0) color : vec4<f32>,
};

@vertex
fn vertex_main(@location(0) position: vec4<f32>,
               @location(1) color: vec4<f32>) -> VertexOut
{
  var output : VertexOut;
  output.position = position;
  output.color = color;
  return output;
}
What's the struct keyword in WebGPU?
First, we see VertexOut. We can use this struct to organize our vertex shader outputs.
struct VertexOut {
  @builtin(position) position : vec4<f32>,
  @location(0) color : vec4<f32>,
};
(To learn more about WebGPU structs, visit this link.)
What's the @builtin WebGPU keyword?
When used on an output of the vertex shader, the @builtin keyword passes crucial shader information along to later steps of the rendering pipeline.
The pieces of information that can be passed along this way are pre-defined by WebGPU itself.
Letâs take a look at the following line of code that exists in our triangle app:
@builtin(position) position : vec4<f32>
Within the parentheses of @builtin(), we see position. If you take a look at the docs, you'll see that the position keyword in WebGPU corresponds to the following description:
Output position of the current vertex, using homogeneous coordinates.
Why is it necessary that we set the position using this @builtin value?
Because the vertex shader's main job is to return the position of the vertex that it's processing. The position must also be a vec4<f32>.
If you're coming from WebGL, this is equivalent to setting gl_Position.
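"Homogeneous coordinates" means the position is a vec4 of (x, y, z, w); after the vertex shader runs, the pipeline divides x, y, and z by w to produce normalized device coordinates. Here is a minimal JavaScript sketch of that fixed-function step (our own illustration, not an API you call):

```javascript
// The perspective divide the pipeline applies to the vertex shader's
// @builtin(position) output: (x, y, z, w) -> (x/w, y/w, z/w).
function perspectiveDivide([x, y, z, w]) {
  return [x / w, y / w, z / w];
}

// With w = 2.0, the clip-space point (1, 0.5, 0, 2) lands at (0.5, 0.25, 0).
perspectiveDivide([1.0, 0.5, 0.0, 2.0]);
```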
Learn more about the WebGPU @builtin keyword here.
What is the @location keyword in WebGPU?
The @location keyword assigns a shader input or output to a numbered input/output (IO) slot, rather than to a location in memory.
For example, we define a color variable in the struct VertexOut like so:
@location(0) color: vec4<f32>
Here, 0 is the index of the input/output slot (IO location).
It is necessary to define the IO location so we know where to find this data when accessing it in the fragment shader.
Here is another code example from the WebGPU docs.
Learn more about the @location keyword here.
What is the @vertex keyword in WebGPU?
The @vertex keyword indicates that the function defined under this keyword is the entry point of a vertex shader. We'll see what this means in the next section.
The Rest of the WebGPU Vertex Shader Code
Now let's take a look at what the vertex shader is doing:
@vertex
fn vertex_main(@location(0) position: vec4<f32>,
               @location(1) color: vec4<f32>) -> VertexOut
{
  var output : VertexOut;
  output.position = position;
  output.color = color;
  return output;
}
First, we define the function, fn vertex_main().
If we look at the function a little closer, we'll see two parameters: @location(0) position: vec4<f32> and @location(1) color: vec4<f32>.
This is important! Recall that a week ago we defined vertexBufferDescriptors, where we set the shaderLocation of the position to 0 and the color to 1. We're now able to access this data in the vertex shader at these shader locations!
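For reference, here is a sketch of how such a descriptor can line up with those shader locations. The exact offsets, formats, and stride below are assumptions that depend on how the vertex buffer was packed; what matters is that shaderLocation 0 and 1 match @location(0) and @location(1) in the shader.

```javascript
// Vertex buffer layout whose shaderLocation values line up with
// @location(0) position and @location(1) color in the vertex shader.
// Assumes interleaved data: float32x4 position followed by float32x4 color.
const vertexBufferDescriptors = [
  {
    attributes: [
      { shaderLocation: 0, offset: 0, format: "float32x4" },  // position
      { shaderLocation: 1, offset: 16, format: "float32x4" }, // color
    ],
    arrayStride: 32, // 8 floats * 4 bytes per vertex
    stepMode: "vertex",
  },
];
```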
In addition, we define the type of the output, output, as a struct VertexOut.
As expected, in the body of the function, we fill in the fields of this struct like so:
var output : VertexOut;
output.position = position;
output.color = color;
return output;
Every @location field in our output struct, here color, is interpolated across the triangle and made available to the fragment shader. The position field is handled separately because it is of the @builtin type.
Therefore, in the next step, we'll be able to use the color data to color our pixels in the fragment shader!
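By default, @location values are smoothly interpolated between the vertices on their way to the fragment shader. If you ever want every fragment of a triangle to receive a single vertex's value instead, WGSL offers the @interpolate attribute. A sketch, not part of our triangle app:

```wgsl
struct VertexOut {
  @builtin(position) position : vec4<f32>,
  // "flat" disables interpolation: every fragment of a triangle
  // receives the value from one designated vertex.
  @location(0) @interpolate(flat) color : vec4<f32>,
};
```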
The WebGPU Fragment Shader Code
Finally, we define the fragment shader by using the @fragment
keyword:
@fragment
fn fragment_main(fragData: VertexOut) -> @location(0) vec4<f32>
{
  return fragData.color;
}
Note that we are returning a color, fragData.color, at the end of the function. A fragment shader must return a color.
The Code for Part 5
You can find the code for this part in this GitHub Gist.
Next Time
We'll move forward with this project and aim to cover some more topics as we render a WebGPU Triangle together.