# PSA for Chromium / Dawn WebGPU API updates 2020-10-19
Chromium's WebGPU implementation and Dawn's API try to closely follow changes to the WebGPU specification. When the WebGPU IDL changes, Chromium and Dawn will try to support both the "old" and the "new" version of the IDL at the same time so prototypes can be updated. In JavaScript, uses of the "old" path will result in a console warning, while when using Dawn directly, the "old" path will print a warning to stderr.
Note that all changes to Dawn's API bring it closer to [`webgpu.h`](https://github.com/webgpu-native/webgpu-headers/blob/master/webgpu.h), which we hope will allow applications to target both [Dawn](https://dawn.googlesource.com/dawn) and [wgpu](https://github.com/gfx-rs/wgpu/) natively before being compiled to WebAssembly. Emscripten will also be updated from the "old" to the "new" API, but won't have the smooth transition since developers control which version of Emscripten they use.
A couple of weeks after an update like this one, the "old" version is removed. This means the "old" versions of the items below will start to be removed from Chromium/Dawn on 2020-11-02.
Previous PSAs:
- [PSA for Chromium / Dawn WebGPU API updates 2020-07-28](https://hackmd.io/szV68rOVQ56GYzPJMBko8A)
- [PSA for Chromium / Dawn WebGPU API updates 2020-04-28](https://hackmd.io/Et7xlhoaThmi8dEX_s-xSw)
## Breaking changes
### Index format in `setIndexBuffer`
WebGPU [PR1](https://github.com/gpuweb/gpuweb/pull/940) [PR2](https://github.com/gpuweb/gpuweb/pull/943)
Due to some difficult interactions between target APIs, the index format was previously part of the `GPURenderPipelineDescriptor` (in its `vertexState`) and had to be provided for all render pipelines. This was inconvenient because all other GPU APIs give the index format in the call to set the index buffer.
Now the index format has to be provided in the call to `setIndexBuffer`, and is redundantly provided at render pipeline creation if and only if the primitive topology is a strip topology. When drawing with an index buffer, if the render pipeline uses a strip topology, the WebGPU implementation will validate that the index formats of the pipeline and of the index buffer match.
In JavaScript the following changes are needed:
```diff
const myListPipeline = device.createRenderPipeline({
  vertexState: {
-    indexFormat: 'uint32'
    // ...
  },
  primitiveTopology: 'triangle-list'
  // ...
});

const myStripPipeline = device.createRenderPipeline({
  vertexState: {
    indexFormat: 'uint32'
    // ...
  },
  primitiveTopology: 'triangle-strip'
  // ...
});

const pass = startRenderPassEncoder();
-pass.setIndexBuffer(myIndexBuffer);
+pass.setIndexBuffer(myIndexBuffer, 'uint32');
pass.setPipeline(myListPipeline);
pass.draw(6);
pass.setPipeline(myStripPipeline);
pass.draw(4);
```
Likewise when using Dawn's API, changes are needed:
```diff
wgpu::VertexStateDescriptor listVertexDesc;
-listVertexDesc.indexFormat = wgpu::IndexFormat::Uint32;
+// Or nothing, it is the default value.
+listVertexDesc.indexFormat = wgpu::IndexFormat::Undefined;
wgpu::RenderPipelineDescriptor listPipelineDesc;
listPipelineDesc.vertexState = &listVertexDesc;
// Create listPipeline as usual.

wgpu::VertexStateDescriptor stripVertexDesc;
stripVertexDesc.indexFormat = wgpu::IndexFormat::Uint32;
// Create stripPipeline as usual.
wgpu::RenderPassEncoder pass = StartRenderPassEncoder();
-pass.SetIndexBuffer(myIndexBuffer);
+pass.SetIndexBufferWithFormat(myIndexBuffer, wgpu::IndexFormat::Uint32);
pass.SetPipeline(myListPipeline);
pass.Draw(6);
pass.SetPipeline(myStripPipeline);
pass.Draw(4);
```
A future breaking change will require renaming `SetIndexBufferWithFormat` to `SetIndexBuffer`.
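When that change lands, calls will likely need to be updated again along these lines (a sketch; the final signature may differ):
```diff
-pass.SetIndexBufferWithFormat(myIndexBuffer, wgpu::IndexFormat::Uint32);
+pass.SetIndexBuffer(myIndexBuffer, wgpu::IndexFormat::Uint32);
```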
### Multisampled texture binding type
WebGPU [PR](https://github.com/gpuweb/gpuweb/pull/1005)
This changes `GPUBindGroupLayoutEntry.multisampled`, which could only be used with `type: 'sampled-texture'`, into its own `GPUBindingType`: `'multisampled-texture'`. In JavaScript the following changes are needed:
```diff
const bindGroupLayout = device.createBindGroupLayout({
  entries: [{
    binding: 0,
-    multisampled: true,
-    type: 'sampled-texture',
+    type: 'multisampled-texture',
  }]
});
```
Likewise when using Dawn's API, changes are needed:
```diff
wgpu::BindGroupLayoutEntry entry;
entry.binding = 0;
-entry.multisampled = true;
-entry.type = wgpu::BindingType::SampledTexture;
+entry.type = wgpu::BindingType::MultisampledTexture;
wgpu::BindGroupLayoutDescriptor desc;
desc.entries = &entry;
desc.entryCount = 1;
wgpu::BindGroupLayout bgl = device.CreateBindGroupLayout(&desc);
```
### Depth comparison texture component type
WebGPU [PR](https://github.com/gpuweb/gpuweb/pull/970)
This change requires textures sampled with a comparison sampler in a shader to be declared with the "depth comparison" component type in bind groups.
This is an example of a shader in GLSL that will require a change in its `GPUBindGroupLayout` (if not created implicitly and queried with `getBindGroupLayout`):
```glsl
#version 450
layout(set = 0, binding = 0) uniform sampler samp;
layout(set = 0, binding = 1) uniform texture2D tex;
layout(location = 0) out float result;
void main() {
    float compareRef = ...;
    result = texture(sampler2DShadow(tex, samp), vec3(0.5, 0.5, compareRef));
}
```
Creating a `GPUBindGroupLayout` compatible with this shader module needs to be changed:
```diff
const bgl = device.createBindGroupLayout({
  entries: [{
    binding: 0,
    type: 'comparison-sampler',
  }, {
    binding: 1,
    type: 'sampled-texture',
-    // that's the default value
-    textureComponentType: 'float',
+    textureComponentType: 'depth-comparison',
  }]
});
```
Likewise when using Dawn's API, changes are needed:
```diff
wgpu::BindGroupLayoutEntry entries[2];
entries[0].binding = 0;
entries[0].type = wgpu::BindingType::ComparisonSampler;

entries[1].binding = 1;
entries[1].type = wgpu::BindingType::SampledTexture;
-// that's the default value
-entries[1].textureComponentType = wgpu::TextureComponentType::Float;
+entries[1].textureComponentType = wgpu::TextureComponentType::DepthComparison;

wgpu::BindGroupLayoutDescriptor desc;
desc.entries = entries;
desc.entryCount = 2;
wgpu::BindGroupLayout bgl = device.CreateBindGroupLayout(&desc);
```
### `textureCompressionBC` -> `texture-compression-bc`
In JavaScript, the `textureCompressionBC` extension is renamed to match the naming convention of the other extensions, so you should use the new name instead:
```diff
const device = await adapter.requestDevice({
-  extensions: ['textureCompressionBC']
+  extensions: ['texture-compression-bc']
});
```
There are no changes when using Dawn's API.
Also note that in the future the concept of extensions will be referred to as (optional) features, and the `extensions` key will become `features` (WebGPU [PR](https://github.com/gpuweb/gpuweb/pull/1097)).
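For illustration only, the request above would presumably then look something like this (not yet supported in Chromium/Dawn at the time of writing):
```diff
const device = await adapter.requestDevice({
-  extensions: ['texture-compression-bc']
+  features: ['texture-compression-bc']
});
```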
### `rg11b10float` -> `rg11b10ufloat`
WebGPU [PR](https://github.com/gpuweb/gpuweb/pull/975)
The names of signed and unsigned floating-point formats have been normalized to `float` and `ufloat` respectively. In JavaScript the following change is needed:
```diff
const texture = device.createTexture({
-  format: 'rg11b10float',
+  format: 'rg11b10ufloat',
  size: [16, 16, 1],
  usage: GPUTextureUsage.SAMPLED,
});
```
Likewise when using Dawn's API, changes are needed:
```diff
wgpu::TextureDescriptor desc;
-desc.format = wgpu::TextureFormat::RG11B10Float;
+desc.format = wgpu::TextureFormat::RG11B10Ufloat;
desc.size = {16, 16, 1};
desc.usage = wgpu::TextureUsage::Sampled;
wgpu::Texture texture = device.CreateTexture(&desc);
```
### `bc6h-rgb-sfloat` -> `bc6h-rgb-float`
WebGPU [PR](https://github.com/gpuweb/gpuweb/pull/1011)
This is similar to the item above; in JavaScript the following change is needed:
```diff
const texture = device.createTexture({
-  format: 'bc6h-rgb-sfloat',
+  format: 'bc6h-rgb-float',
  size: [16, 16, 1],
  usage: GPUTextureUsage.SAMPLED,
});
```
Likewise when using Dawn's API, changes are needed:
```diff
wgpu::TextureDescriptor desc;
-desc.format = wgpu::TextureFormat::BC6HRGBSfloat;
+desc.format = wgpu::TextureFormat::BC6HRGBFloat;
desc.size = {16, 16, 1};
desc.usage = wgpu::TextureUsage::Sampled;
wgpu::Texture texture = device.CreateTexture(&desc);
```
## New features and improvements
### Linux support coming soon!
**TL;DR `google-chrome-unstable --enable-unsafe-webgpu --enable-features=Vulkan,UseSkiaRenderer`**
WebGPU will soon be available on Chromium Dev on Linux! It will be behind the usual `--enable-unsafe-webgpu` flag, but will also require the `--enable-features=Vulkan,UseSkiaRenderer` command line flag. The second flag is required so that the Vulkan-based WebGPU implementation can interoperate with Skia (Chromium's 2D graphics library) when Skia also runs on Vulkan. Interoperability between WebGPU backed by Vulkan and Skia using OpenGL is more difficult and doesn't work at this time.
Note that the implementation of WebGPU on Vulkan is less mature than on the other graphics APIs so there might be glitches but also code paths that are much slower. Please [file bugs](https://bugs.chromium.org/p/dawn/issues/list) if you find issues or unexpected slowdowns!
### Experimental WGSL support available in Chromium
WGSL is the WebGPU Shading Language developed by the "GPU for the Web" W3C group and its work-in-progress specification is [available online](https://gpuweb.github.io/gpuweb/wgsl). Previously Chromium only supported `GPUShaderModules` being created from SPIR-V code, but there is now an experimental path to create `GPUShaderModules` from WGSL source strings.
WGSL support is still experimental and doesn't cover the whole language. For example, lexical scoping and some bindings aren't implemented yet, but will come soon. All of the [webgpu-samples](https://austineng.github.io/webgpu-samples/) have WGSL versions that you can use as examples of how to use WGSL.
In the future, when the WGSL specification stabilizes and the implementation in Chromium is more mature, the SPIR-V path to creating `GPUShaderModules` will be deprecated (with much more than two weeks' notice) in favor of the WGSL path. The goal is that when WebGPU becomes generally available, WGSL will be the only way to create `GPUShaderModules` from JavaScript.
Here's how to create a WGSL `GPUShaderModule` in JavaScript:
```js
const module = device.createShaderModule({
  code: `
    [[location(0)]] var<out> outColor : vec4<f32>;

    [[stage(fragment)]]
    fn main() -> void {
      outColor = vec4<f32>(1.0, 0.0, 0.0, 1.0);
      return;
    }`,
});
```
And here's how to do it using Dawn's API:
```cpp
wgpu::ShaderModuleWGSLDescriptor wgslDesc;
wgslDesc.source = R"(
    [[location(0)]] var<out> outColor : vec4<f32>;

    [[stage(fragment)]]
    fn main() -> void {
        outColor = vec4<f32>(1.0, 0.0, 0.0, 1.0);
        return;
    })";
wgpu::ShaderModuleDescriptor descriptor;
descriptor.nextInChain = &wgslDesc;
wgpu::ShaderModule module = device.CreateShaderModule(&descriptor);
```
### `GPUQueue.writeTexture`
`GPUQueue.writeTexture` is a new API that allows writing pixel data to textures easily. It is very similar to `GPUQueue.writeBuffer` but takes additional arguments describing which part of the texture to update and how the texture data is laid out in the `ArrayBuffer`.
The arguments to `writeTexture` are similar to those of `GPUCommandEncoder.copyBufferToTexture`: where the copy starts in the texture (`GPUTextureCopyView`), the source data (`BufferSource`), how to interpret that data as a texture (`GPUTextureDataLayout`), and the size of the copy (`GPUExtent3D`) between the real texture and the JavaScript buffer viewed as a texture.
``` webidl
writeTexture(GPUTextureCopyView destination,
BufferSource data,
GPUTextureDataLayout dataLayout,
GPUExtent3D size);
```
Here's an example that puts some data in a 2D texture in JavaScript (the `COPY_DST` usage is required for use with `writeTexture`):
```js
const texture = device.createTexture({
  dimension: '2d',
  size: [2, 2, 1],
  format: 'rgba8unorm',
  usage: GPUTextureUsage.SAMPLED | GPUTextureUsage.COPY_DST,
});

const pixelData = new Uint32Array([
  0xFF000000,
  0x00FF0000,
  0x0000FF00,
  0x000000FF,
]);

device.defaultQueue.writeTexture({
  texture,
  // Other dictionary members are optional, these are the default values.
  mipLevel: 0,
  origin: {x: 0, y: 0, z: 0},
}, pixelData, {
  // The offset of the first pixel in `pixelData`, defaults to 0.
  offset: 0,
  // Contrary to `copyBufferToTexture` there are no alignment constraints.
  bytesPerRow: 2 * 4,
}, {
  width: 2,
  height: 2,
  depth: 1,
});
```
Similarly, using Dawn's API:
```cpp
wgpu::TextureDescriptor desc;
desc.dimension = wgpu::TextureDimension::e2D;
desc.size = {2, 2, 1};
desc.format = wgpu::TextureFormat::RGBA8Unorm;
desc.usage = wgpu::TextureUsage::Sampled | wgpu::TextureUsage::CopyDst;
wgpu::Texture texture = device.CreateTexture(&desc);

uint32_t pixelData[] = {
    0xFF000000,
    0x00FF0000,
    0x0000FF00,
    0x000000FF,
};

wgpu::TextureCopyView dstView;
dstView.texture = texture;
// The other members have default values, which are used here.
dstView.mipLevel = 0;
dstView.origin = {0, 0, 0};

wgpu::TextureDataLayout srcLayout;
srcLayout.offset = 0; // Note, this is just added to the `pixelData` pointer.
// Contrary to `copyBufferToTexture` there are no alignment constraints.
srcLayout.bytesPerRow = 2 * 4;

wgpu::Extent3D copySize = {2, 2, 1};
wgpu::Queue queue = device.GetDefaultQueue();
queue.WriteTexture(&dstView, pixelData, sizeof(pixelData), &srcLayout, &copySize);
```
### Timestamp queries
Timestamp queries allow fine-grained measurement of the execution time of operations on the GPU to help benchmark applications during development. They are enabled via the `'timestamp-query'` extension, which might not always be available for security and privacy reasons.
These queries are used via a new `GPUQuerySet` object (which will also be used for occlusion and pipeline-statistics queries in the future). That object collects the values of the queries, which can then be resolved into a `GPUBuffer` with the `QUERY_RESOLVE` usage.
Here's an example of how to use them in JavaScript:
```js
const device = await adapter.requestDevice({
  extensions: ['timestamp-query']
});

const querySet = device.createQuerySet({
  count: 2,
  type: 'timestamp',
});

const resolveBuffer = device.createBuffer({
  size: 8 * 2, // Each query writes 64 bits of data.
  usage: GPUBufferUsage.QUERY_RESOLVE | GPUBufferUsage.COPY_SRC,
});
const readbackBuffer = device.createBuffer({
  size: 8 * 2,
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
});

const encoder = device.createCommandEncoder();

// Do an expensive operation and write the timestamp before in query 0
// and the timestamp after in query 1.
encoder.writeTimestamp(querySet, 0);
encodeExpensiveCommandsOn(encoder);
encoder.writeTimestamp(querySet, 1);

// "Resolve" the queries and put them in a map-readable buffer.
encoder.resolveQuerySet(querySet, 0, 2, resolveBuffer, 0);
encoder.copyBufferToBuffer(resolveBuffer, 0, readbackBuffer, 0, 2 * 8);
device.defaultQueue.submit([encoder.finish()]);

await readbackBuffer.mapAsync(GPUMapMode.READ);
const queryData = readbackBuffer.getMappedRange();
// Now jump through hoops to subtract two 64-bit integers in JavaScript.
```
Note that the delta between two timestamp queries is supposed to be in nanoseconds, but the conversion to that unit hasn't been implemented in Dawn yet. This means that the results are in "ticks" that are API- and hardware-dependent. They can still help determine whether something got faster or slower, or you can do the conversion manually if you know how long a "tick" is.
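For example, one way to do that subtraction is to view the mapped data as 64-bit unsigned integers with `BigUint64Array` (a sketch based on the `queryData` variable from the example above):
```js
const timestamps = new BigUint64Array(queryData);
// Still in API- and hardware-dependent "ticks" until Dawn implements the
// conversion to nanoseconds.
const deltaTicks = timestamps[1] - timestamps[0];
```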
Timestamp queries can also be used inside render and compute passes, although the timings inside render passes can be very misleading on tiler GPUs because draws aren't processed one after the other.
Using Dawn's API for timestamp queries is similar but left as an exercise to the reader.
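To give an idea anyway, here is a minimal sketch of the equivalent Dawn calls, assuming `webgpu.h`-style names (`QuerySetDescriptor`, `WriteTimestamp`, `ResolveQuerySet`) and a `resolveBuffer` created with the `QueryResolve` usage; check Dawn's headers for the exact API at the revision you use:
```cpp
wgpu::QuerySetDescriptor querySetDesc;
querySetDesc.type = wgpu::QueryType::Timestamp;
querySetDesc.count = 2;
wgpu::QuerySet querySet = device.CreateQuerySet(&querySetDesc);

// resolveBuffer needs wgpu::BufferUsage::QueryResolve, like in the
// JavaScript example above.
wgpu::CommandEncoder encoder = device.CreateCommandEncoder();
encoder.WriteTimestamp(querySet, 0);
// Encode expensive commands here.
encoder.WriteTimestamp(querySet, 1);
encoder.ResolveQuerySet(querySet, 0, 2, resolveBuffer, 0);
```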
### Depth-stencil copies
#### Texture-Buffer Copies from/to stencil
Copies between a buffer and the stencil aspect of a texture with a combined depth-stencil format are now supported by specifying `aspect: 'stencil-only'` on the `GPUTextureCopyView`. Stencil data is packed in the buffer (one byte per texel).
```javascript
const buffer = ...;
/* Buffer data looks like: [
 *   2 bytes stencil; 254 bytes padding ..
 *   2 bytes stencil; 254 bytes padding ..
 *   ...
 * ]
 */

const texture = device.createTexture({
  size: [2, 2, 1],
  format: 'depth24plus-stencil8',
  usage: GPUTextureUsage.COPY_DST,
});

const commandEncoder = device.createCommandEncoder();
commandEncoder.copyBufferToTexture({
  buffer,
  // (default) offset: 0,
  bytesPerRow: 256,
  rowsPerImage: 2,
}, {
  texture,
  // Defaults to 'all'
  aspect: 'stencil-only',
  // (default) mipLevel: 0,
  // (default) origin: [0, 0, 0],
}, [2, 2, 1]);
```
Copies to/from single-aspect formats like `stencil8` do not need to specify the aspect; it may be either `'all'` or `'stencil-only'`.
*Note: Chromium does not currently support the `stencil8` format.*
```javascript
const texture = device.createTexture({
  format: 'stencil8',
  ...
});

commandEncoder.copyBufferToTexture({ buffer: ... }, {
  texture,
  // (default) aspect: 'all',
  ...
}, ...);
```
#### Texture-Buffer Copies from depth
Copies from a buffer *to* the depth aspect of a texture are **not** supported because the implementation would need to validate or clamp texel values to ensure they are between 0 and 1. To write into the depth aspect, execute a render pass that draws a full screen quad and writes depth.
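For example, a hypothetical GLSL fragment shader for such a full screen quad could read the depth values from a regular sampled texture (here called `depthData`, an assumption for illustration) and write them to `gl_FragDepth`; the render pass would use the destination texture as its depth attachment with depth writes enabled and an `'always'` depth compare:
```glsl
#version 450
layout(set = 0, binding = 0) uniform sampler samp;
layout(set = 0, binding = 1) uniform texture2D depthData;

void main() {
    // Write the sampled value as the fragment's depth.
    gl_FragDepth = texelFetch(sampler2D(depthData, samp), ivec2(gl_FragCoord.xy), 0).r;
}
```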
Copies *from* the depth aspect of a texture to a buffer for the `depth32float` format are supported by specifying either `all` (default) or `depth-only` on `GPUTextureCopyView`.
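As an illustration, reading back a 2x2 `depth32float` texture could look like the following sketch, where `depthTexture` is assumed to have been created with the `COPY_SRC` usage:
```js
const depthReadback = device.createBuffer({
  size: 256 * 2, // bytesPerRow must be aligned to 256 for buffer copies.
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
});

const commandEncoder = device.createCommandEncoder();
commandEncoder.copyTextureToBuffer({
  texture: depthTexture,
  // 'all' (the default) works too since depth32float has a single aspect.
  aspect: 'depth-only',
}, {
  buffer: depthReadback,
  bytesPerRow: 256,
  rowsPerImage: 2,
}, [2, 2, 1]);
```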
#### Texture-Texture Copies (all aspects)
Texture to texture copies must copy all aspects of combined depth-stencil textures. This is a restriction from the Metal API. It is a validation error to specify a `GPUTextureAspect` that is not `all` in a texture to texture copy.
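For example, with two `depth24plus-stencil8` textures of the same size (hypothetical `srcDepthStencil` and `dstDepthStencil` variables), the copy simply leaves the aspect at its default:
```js
const commandEncoder = device.createCommandEncoder();
// Both aspects are copied; specifying 'stencil-only' or 'depth-only' here
// would be a validation error for a combined depth-stencil format.
commandEncoder.copyTextureToTexture(
  { texture: srcDepthStencil }, // aspect defaults to 'all'
  { texture: dstDepthStencil },
  [2, 2, 1],
);
```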
### DepthBias
When rendering to shadow maps it is often useful to add a small bias or offset to the rendered depth to avoid light bleeding when using the shadow map in the main render. This feature is called `gl.polygonOffset` in WebGL but is referred to as `depthBias` in WebGPU. It is part of the rasterization state of render pipelines:
```js
const pipeline = device.createRenderPipeline({
  // ...
  rasterizationState: {
    // Defaults to 0
    depthBias: 1,
    // Defaults to 0
    depthBiasSlopeScale: 2,
    // Defaults to 0, which doesn't clamp.
    depthBiasClamp: 0.00000001,
  },
});
```
### RGB9E5
This is an additional texture format that will be supported on all WebGPU implementations. It is useful for HDR textures since it can represent all the color values of `rgba8unorm` with even more precision, as well as color values larger than 1. It's a funny floating-point format with a shared 5-bit exponent, no `NaN` or `Infinity`, and no sign bit. It is not renderable though.
It can be used in JavaScript with the following:
```js
const texture = device.createTexture({
  format: 'rgb9e5float',
  size: [16, 16, 1],
  usage: GPUTextureUsage.COPY_DST | GPUTextureUsage.SAMPLED,
});
```
It can be used with Dawn's API with the following:
```cpp
wgpu::TextureDescriptor desc;
desc.format = wgpu::TextureFormat::RGB9E5Float;
desc.size = {16, 16, 1};
desc.usage = wgpu::TextureUsage::CopyDst | wgpu::TextureUsage::Sampled;
wgpu::Texture texture = device.CreateTexture(&desc);
```