I would like to conclude this story of the transformations a vertex undergoes as it travels through the render pipeline. Our final stop is 2D space. Let's go!
The viewport transformation
We resume our discussion from normalized device coordinates (NDC). At this point, we're still in 3D space.
How do we get to 2D space?
We need to transform our vertex from 3D NDC to 2D screen coordinates.
When initializing the Canvas, we are responsible for configuring its size. This size is used to convert our NDC coordinates to screen coordinates. Here is the illustration:
Pay attention to the position of the origin of each space in the illustration above:
- NDC - The origin is at the center. Our NDC coordinates are normalized between (-1, 1) on the x and y axes, and the value of the z axis is between (0, 1).
- Screen coordinates - The origin is at the top left. The bottom-right point is the defined size of our screen (width w, height h).
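For instance, with these conventions the NDC point (-1, 1) lands on the screen origin, (1, -1) on the opposite corner, and (0, 0) in the middle of the image (assuming a screen of width w and height h, as above):

$$
(-1,\ 1) \mapsto (0,\ 0), \qquad (1,\ -1) \mapsto (w,\ h), \qquad (0,\ 0) \mapsto (w/2,\ h/2)
$$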
The framebuffer
To draw our image to the screen, WebGPU needs to write information about each pixel to a color buffer.
The color buffer, together with other buffers that can be used to render the final image, such as the depth buffer and the stencil buffer, is stored in GPU memory. This combination is called a framebuffer.
Since our framebuffer contains a color buffer holding the color of every pixel in an area the size of our screen (or of a Canvas size arbitrarily defined by us), we need to transform our normalized coordinates into coordinates that correspond to our render target (the screen / Canvas).
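To make this concrete, here is a minimal sketch of how that combination looks in WebGPU, where a "framebuffer" is simply the set of attachments of a render pass. This is not taken from our earlier triangle code, so treat device and context as placeholders for the objects we created there.

```js
// Sketch: the "framebuffer" is the set of attachments of a render pass.
// Assumes `device` and `context` (the configured Canvas context) already exist.
const depthTexture = device.createTexture({
  size: [context.canvas.width, context.canvas.height], // same size as the Canvas
  format: 'depth24plus',                                // the depth buffer
  usage: GPUTextureUsage.RENDER_ATTACHMENT,
});

const renderPassDescriptor = {
  colorAttachments: [{
    view: context.getCurrentTexture().createView(),     // the color buffer (Canvas texture)
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
    loadOp: 'clear',
    storeOp: 'store',
  }],
  depthStencilAttachment: {
    view: depthTexture.createView(),                     // the depth buffer
    depthClearValue: 1.0,
    depthLoadOp: 'clear',
    depthStoreOp: 'store',
  },
};
```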
The transformation math
The x and y axes correspond to the defined size of the final image. Fortunately, the WebGPU docs provide us with the calculation. The height (vp.h) and the width (vp.w) are the defined height and width of our viewport, and vp.x and vp.y are position offsets; let's suppose that we define them as 0.
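With those names in place, the x and y part of the viewport transform can be written as follows (my paraphrase of the docs rather than a quote):

$$
x_{screen} = vp.x + \frac{x_{ndc} + 1}{2} \cdot vp.w, \qquad y_{screen} = vp.y + \frac{1 - y_{ndc}}{2} \cdot vp.h
$$

Note the (1 - y_ndc) term: it accounts for the flip between NDC, where y points up, and screen coordinates, where y grows downward from the top-left origin.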
Currently the calculation to convert the z axis is not defined in the documentation. However, Songho tells us that for OpenGL the calculation is as follows, where f is the position value of the far plane and n is the position value of the near plane (see last week's article to learn more):
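Written out (in my notation, not a quote from Songho's page):

$$
z_{screen} = \frac{f - n}{2} \cdot z_{ndc} + \frac{f + n}{2}
$$

Keep in mind that this formula targets OpenGL, whose NDC z ranges from -1 to 1, whereas we saw above that WebGPU's NDC z already lies between 0 and 1.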
With these calculations, we can see that the mapping between NDC and screen coordinates is a linear relationship.
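To make that linearity concrete, here is a small illustrative helper (not part of the original triangle code) that applies the mapping above, assuming viewport offsets of 0 and a depth range of [near, far]:

```js
// Illustrative only: maps a point from NDC to screen coordinates.
// Assumes vp.x = vp.y = 0 and a configurable depth range [near, far].
const ndcToScreen = ([x, y, z], width, height, near = 0, far = 1) => ({
  x: ((x + 1) / 2) * width,   // [-1, 1] -> [0, width]
  y: ((1 - y) / 2) * height,  // [-1, 1] -> [0, height], flipped (origin is top left)
  z: near + z * (far - near), // WebGPU's NDC z is already in [0, 1]
});

// The NDC origin lands in the middle of an 800x600 image.
console.log(ndcToScreen([0, 0, 0.5], 800, 600)); // { x: 400, y: 300, z: 0.5 }
```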
💡 What happens to the z coordinate?
During rasterization, the color of a pixel that will be rendered on the screen is stored in the color buffer.
However, if a depth test is configured, the z coordinate can be used to resolve visibility, with the help of the depth buffer, during the process called *merging*.
In short, the depth test ensures that the vertices/primitives closest to the camera are the ones written to the color buffer (rendered in the final image) while the others are discarded.
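As a small preview, enabling such a depth test in WebGPU amounts to adding a depthStencil state to the render pipeline. The snippet below is a hypothetical sketch: shaderModule, its entry points and presentationFormat are assumptions, and the depth format must match the depth buffer attached to the render pass.

```js
// Sketch: a render pipeline that keeps, for each pixel, the fragment closest to the camera.
const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: { module: shaderModule, entryPoint: 'vertexMain' },
  fragment: {
    module: shaderModule,
    entryPoint: 'fragmentMain',
    targets: [{ format: presentationFormat }],
  },
  depthStencil: {
    format: 'depth24plus',   // must match the depth attachment's format
    depthWriteEnabled: true, // store the winning z value in the depth buffer
    depthCompare: 'less',    // keep the fragment with the smaller depth
  },
});
```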
We will return to this subject when we discuss rasterization.
The WebGPU Code
A few weeks ago we saw some code to create a triangle with WebGPU. Recall that there is a step to configure both our CanvasContext and swap chain:
```js
const configureContext = (presentationFormat) => {
  // Account for high-DPI displays so the image is not blurry.
  const devicePixelRatio = window.devicePixelRatio || 1;

  // The final size of the image, in physical pixels.
  const presentationSize = [
    this._context.canvas.clientWidth * devicePixelRatio,
    this._context.canvas.clientHeight * devicePixelRatio,
  ];

  this._context.configure({
    device: this._device,
    format: presentationFormat,
    size: presentationSize,
  });
};

const presentationFormat = this._context.getPreferredFormat(this._adapter);
configureContext(presentationFormat);
```
It is our job to provide the final size of the image. This size is used when transforming NDC to screen coordinates.
It is also up to us to update this value whenever the size of the window changes.
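One way to do this, as a sketch reusing the configureContext helper above, is to reconfigure the context whenever the window is resized:

```js
// Keep the presentation size (and thus the NDC-to-screen mapping) in sync with the window.
window.addEventListener('resize', () => {
  configureContext(presentationFormat);
});
```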
Next time
I think it's time to continue the fabric simulation adventure. Equipped with this new knowledge, we are ready to implement a way to load an .obj file into our scene.