Transforming Coordinates From 3D to 2D | WebGPU

The grand finale of our story of transformations. We will see how to transform a 3D vertex from NDC into 2D screen coordinates, which will be used to render an image that makes us proud.

Keywords: WebGPU, the rendering pipeline, realtime rendering, transformations

By Carmen Cincotti

I would like to conclude this story of the transformations that a vertex undergoes as it travels through the render pipeline. Our final stop is 2D space. Let's go!

The viewport transformation

We resume our discussion from NDC. At this moment, we're still in 3D space.

How do we get to 2D space?

We need to transform our vertex from 3D NDC to 2D screen coordinates.

When initializing the Canvas, we are responsible for configuring its size. This size is used to convert our NDC coordinates to screen coordinates. Here is the illustration:

NDC to Screen Coordinates

Pay attention to the position of the origin of each space in the illustration above:

  • NDC - The origin is at the center. Our NDC coordinates are normalized to the range (-1, 1) on the x and y axes. The value on the z axis lies in the range (0, 1).

  • Screen coordinates - The origin is at the top left. The bottom-right point corresponds to the defined size of our screen (width w, height h).

The framebuffer

To draw our image to the screen, WebGPU needs to write information about each pixel into a color buffer.

The color buffer, along with the other buffers that can be used to render a final image, such as the depth buffer and the stencil buffer, is stored in GPU memory. This combination is called a framebuffer.

Since our framebuffer contains a color buffer that stores color information for an area of pixels the size of our screen (or of a Canvas size we define arbitrarily), we need to transform our normalized coordinates into coordinates that correspond to our render target (the screen / Canvas).
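In WebGPU there is no explicit framebuffer object. Instead, the color buffer and the depth/stencil buffer are textures that we attach to a render pass. Here is a rough, hypothetical sketch (the exact descriptor fields have shifted between WebGPU drafts, and `device`, `context`, and `canvas` are assumed to already exist):

```js
// Hypothetical sketch: the render pass attachments play the role of a framebuffer.
const depthTexture = device.createTexture({
  size: [canvas.width, canvas.height],
  format: "depth24plus", // the depth buffer
  usage: GPUTextureUsage.RENDER_ATTACHMENT,
});

const renderPassDescriptor = {
  colorAttachments: [{
    view: context.getCurrentTexture().createView(), // the color buffer
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
    loadOp: "clear",
    storeOp: "store",
  }],
  depthStencilAttachment: {
    view: depthTexture.createView(),
    depthClearValue: 1.0,
    depthLoadOp: "clear",
    depthStoreOp: "store",
  },
};
```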

The transformation math

The x and y axes correspond to the defined size of the final image. Fortunately, the WebGPU docs provide us with this calculation:

$$
\textbf{fbCoords(n).x} = vp.x + 0.5 * (n.x + 1) * vp.w \\
\textbf{fbCoords(n).y} = vp.y + 0.5 * (n.y + 1) * vp.h
$$

The width (vp.w) and height (vp.h) are the defined width and height of our viewport. The vp.x and vp.y values are position offsets; let's suppose we define them as 0.
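To make this concrete, here is a minimal sketch of the x and y formulas in plain JavaScript. The helper name ndcToFramebufferXY and the vp object are hypothetical, not part of the WebGPU API:

```js
// A minimal sketch of the x/y viewport transformation quoted above.
// `vp` is a hypothetical viewport object: { x, y, w, h }.
const ndcToFramebufferXY = (n, vp) => ({
  x: vp.x + 0.5 * (n.x + 1) * vp.w,
  y: vp.y + 0.5 * (n.y + 1) * vp.h,
});

// Example: an 800x600 viewport with no offset.
const vp = { x: 0, y: 0, w: 800, h: 600 };
console.log(ndcToFramebufferXY({ x: 0, y: 0 }, vp));   // { x: 400, y: 300 }
console.log(ndcToFramebufferXY({ x: -1, y: -1 }, vp)); // { x: 0, y: 0 }
```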

Currently, the calculation to convert the z axis is not defined in the WebGPU documentation. However, Songho tells us that for OpenGL the calculation is as follows:

$$
\textbf{fbCoords(n).z} = 0.5 * (f - n) * n.z + 0.5 * (f + n)
$$

where f is the position value of the far plane and n is the position value of the near plane (n.z, as before, is the z component of the NDC point). See last week's article to learn more.

With these calculations, we can see that the mapping between NDC and screen coordinates is a linear relationship:

$$
\textbf{x}: -1 \rightarrow x, \quad 1 \rightarrow x + w \\
\textbf{y}: -1 \rightarrow y, \quad 1 \rightarrow y + h \\
\textbf{z}: 0 \rightarrow n, \quad 1 \rightarrow f
$$
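Putting the pieces together, here is a hedged sketch that implements the full mapping. Note that Songho's formula assumes OpenGL's NDC z range of (-1, 1); the sketch instead follows the WebGPU-style mapping listed just above, where z runs from 0 to 1. The helper name ndcToScreen is hypothetical:

```js
// Sketch: NDC -> screen coordinates, following the linear mapping above.
// vp = { x, y, w, h }; near and far are the near/far plane values.
const ndcToScreen = (n, vp, near, far) => ({
  x: vp.x + 0.5 * (n.x + 1) * vp.w,
  y: vp.y + 0.5 * (n.y + 1) * vp.h,
  z: near + n.z * (far - near), // z: 0 -> near, 1 -> far
});

const vp = { x: 0, y: 0, w: 800, h: 600 };
console.log(ndcToScreen({ x: 1, y: 1, z: 1 }, vp, 0.1, 100));
// { x: 800, y: 600, z: 100 } -- matching the endpoints listed above
```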

💡 What happens to the z coordinate?

During rasterization, the color of a pixel that will be rendered on the screen is stored in the color buffer.

However, if a depth test is configured, the z coordinate can be used to resolve visibility with the help of the depth buffer, during the process called *merging*.

In short, the depth test ensures that the vertices/primitives closest to the camera are written to the color buffer (and so rendered in the final image), while the others are discarded.

We will return to this subject when we discuss rasterization.
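As a small preview of that discussion, here is roughly how a depth test is enabled on a WebGPU render pipeline. This is a sketch only: it assumes `device`, `shaderModule`, and `presentationFormat` already exist, that a `depth24plus` depth texture is attached to the render pass, and it uses the newer `layout: "auto"` form of the API:

```js
// Sketch: enabling the depth test on a render pipeline.
// Fragments whose z fails the "less" comparison against the stored depth are discarded.
const pipeline = device.createRenderPipeline({
  layout: "auto",
  vertex: { module: shaderModule, entryPoint: "vs_main" },
  fragment: {
    module: shaderModule,
    entryPoint: "fs_main",
    targets: [{ format: presentationFormat }],
  },
  primitive: { topology: "triangle-list" },
  depthStencil: {
    format: "depth24plus",
    depthWriteEnabled: true, // store the closest z in the depth buffer
    depthCompare: "less",    // keep the fragment closest to the camera
  },
});
```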

The WebGPU Code

A few weeks ago, we saw some code to create a triangle with WebGPU. Recall that there is a step to configure both our CanvasContext and the swap chain:

```js
const configureContext = (presentationFormat) => {
  // Account for high-DPI displays when computing the final image size.
  const devicePixelRatio = window.devicePixelRatio || 1;
  const presentationSize = [
    this._context.canvas.clientWidth * devicePixelRatio,
    this._context.canvas.clientHeight * devicePixelRatio,
  ];

  // This size is what WebGPU uses when mapping NDC to screen coordinates.
  this._context.configure({
    device: this._device,
    format: presentationFormat,
    size: presentationSize,
  });
};

const presentationFormat = this._context.getPreferredFormat(this._adapter);
configureContext(presentationFormat);
```

It is our job to provide the final size of the image. This size is used when transforming NDC to screen coordinates.

It is up to us to reconfigure this size whenever the window is resized.
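For example, a minimal sketch of a resize handler, reusing the configureContext helper above (and assuming it is still in scope), might look like this:

```js
// Sketch: reconfigure the context on resize so that the NDC -> screen
// transformation uses the new canvas size.
window.addEventListener("resize", () => {
  configureContext(presentationFormat);
});
```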

Next time

I think it's time to continue the fabric simulation adventure. Equipped with this new knowledge, we are ready to implement a way to load an .obj file into our scene.


Written by Carmen Cincotti, computer graphics enthusiast, language learner, and improv actor currently living in San Francisco, CA. Follow @CarmenCincotti
