Chromium's WebGPU implementation and Dawn's API try to closely follow changes to the WebGPU specification. When the WebGPU IDL changes, Chromium and Dawn try to support both the "old" and the "new" version of the IDL at the same time so prototypes can be updated. In JavaScript, uses of the "old" path result in a console warning, while uses of the "old" path through Dawn's API directly print a warning to stderr.
Note that all changes to Dawn's API bring it closer to webgpu.h, which we hope will allow applications to target both Dawn and wgpu in native code before being compiled to WASM. Emscripten will also be updated from the "old" to the "new" API, but won't have the same smooth transition since developers control which version of Emscripten they use.
A couple of weeks after an update like this one, the "old" version is removed. This means that the "old" versions of the items below will start being removed from Chromium/Dawn on 2020-08-11.
Previous PSAs:
WebGPU PR
Instead of using GPUDevice.createBufferMapped, creating a buffer mapped at creation is done by setting GPUBufferDescriptor.mappedAtCreation to true. The mapping of the buffer can then be retrieved by calling GPUBuffer.getMappedRange.
In JavaScript, creation of buffers mapped at creation must be updated:
-const [buffer, mapping] = device.createBufferMapped({
+const buffer = device.createBuffer({
+ mappedAtCreation: true,
size: 4,
usage: GPUBufferUsage.UNIFORM,
});
+const mapping = buffer.getMappedRange();
Likewise when using Dawn’s API, changes are needed:
wgpu::BufferDescriptor descriptor;
descriptor.size = 4;
descriptor.usage = wgpu::BufferUsage::Uniform;
+descriptor.mappedAtCreation = true;
+wgpu::Buffer buffer = device.CreateBuffer(&descriptor);
+void* mapping = buffer.GetMappedRange();
-wgpu::CreateBufferMappedResult result = device.CreateBufferMapped(&descriptor);
-wgpu::Buffer buffer = result.buffer;
-void* mapping = result.data;
Instead of using GPUBuffer.mapReadAsync and GPUBuffer.mapWriteAsync, both forms of asynchronous buffer mapping are now done through the GPUBuffer.mapAsync call. GPUBuffer.mapAsync takes additional parameters to select the type of mapping operation (GPUMapMode.READ or GPUMapMode.WRITE) as well as optional arguments to select which range of the buffer to map. Mapping any range of the buffer marks the whole buffer as mapped, but allows Web engines to move data only for the selected range.
Calling GPUBuffer.mapAsync returns a Promise that resolves when the buffer is mapped, at which point it is possible to get the mapped range with GPUBuffer.getMappedRange().
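For instance, reading back just part of a buffer could be wrapped like this (a sketch; `readBufferRange` is a hypothetical helper, not part of the API, and the numeric fallback for `GPUMapMode.READ` is only there so the snippet is self-contained):

```javascript
// Sketch: map and read back only a sub-range of a buffer.
// GPUMapMode.READ is the WebGPU constant (value 0x0001); we default it
// here so the sketch stands alone outside a browser.
const MAP_READ = (globalThis.GPUMapMode && GPUMapMode.READ) || 1;

async function readBufferRange(buffer, offset, size) {
  // Mapping marks the whole buffer as mapped, but the engine only needs
  // to make `size` bytes available starting at `offset`.
  await buffer.mapAsync(MAP_READ, offset, size);
  // Copy the mapped bytes out so they stay valid after unmap().
  const data = buffer.getMappedRange(offset, size).slice(0);
  buffer.unmap();
  return data;
}
```

In a browser this would be called with a real GPUBuffer created with GPUBufferUsage.MAP_READ; the offset and size must stay within the buffer.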
In JavaScript, uses of mapReadAsync and mapWriteAsync must be updated to use mapAsync instead:
-const data = await buffer.mapReadAsync();
+await buffer.mapAsync(GPUMapMode.READ);
+const data = buffer.getMappedRange();
-const data = await buffer.mapWriteAsync();
+await buffer.mapAsync(GPUMapMode.WRITE);
+const data = buffer.getMappedRange();
Likewise when using Dawn's API, changes are needed:
-void OnBufferMapped(wgpu::BufferMapAsyncStatus status,
- const void* data,
- uint64_t dataLength,
- void* userdata) {
- // Do something with `data`
-}
-buffer.MapReadAsync(OnBufferMapped, myUserdata);
+void OnBufferMapped(wgpu::BufferMapAsyncStatus status,
+                    void* userdata) {
+    // `buffer` must be reachable here, for example through `userdata`.
+    const void* data = buffer.GetConstMappedRange();
+}
+buffer.MapAsync(wgpu::MapMode::Read, 0, bufferSize,
+ OnBufferMapped, myUserdata);
Note that in webgpu.h, for type safety, there are two ways to get the mapped range: wgpu::Buffer::GetConstMappedRange, which returns a const void* and can be called with both Read and Write mapping modes, and wgpu::Buffer::GetMappedRange, which returns a void* and can only be called when the mapping mode is Write.
GPUBuffer.setSubData -> GPUQueue.writeBuffer
WebGPU PR
In JavaScript, uses of the method on GPUBuffer need to be updated to use the default queue instead:
-buffer.setSubData(0, new Uint32Array([42]));
+queue.writeBuffer(buffer, 0, new Uint32Array([42]));
Likewise when using Dawn's API, changes are needed:
-buffer.SetSubData(0, 4, &data);
+queue.WriteBuffer(buffer, 0, &data, 4);
GPUTextureDescriptor.arrayLayerCount -> .size.depth
WebGPU PR
For 2D textures, the texture's depth now represents how many array layers it contains.
In JavaScript texture creation code needs to be updated:
const texture = device.createTexture({
format: "rgba8unorm",
usage: GPUTextureUsage.SAMPLED | GPUTextureUsage.COPY_SRC,
- arrayLayerCount: 10,
- size: { width: 16, height: 16, depth: 1 },
+ size: { width: 16, height: 16, depth: 10 },
});
Likewise when using Dawn’s API, changes are needed:
wgpu::TextureDescriptor descriptor;
descriptor.format = wgpu::TextureFormat::RGBA8Unorm;
descriptor.usage = wgpu::TextureUsage::Sampled | wgpu::TextureUsage::CopySrc;
-descriptor.arrayLayerCount = 10;
-descriptor.size = {16, 16, 1};
+descriptor.size = {16, 16, 10};
wgpu::Texture texture = device.CreateTexture(&descriptor);
GPUTextureCopyView.arrayLayer -> .origin.z
WebGPU PR
To match the change in GPUTextureDescriptor, the Z component of the origin of a copy from or to a texture now encodes the array layer to copy to or from. In JavaScript, copies need to be updated:
encoder.copyBufferToTexture({
buffer,
bytesPerRow: 256,
}, {
texture,
- arrayLayer: 9,
- origin: {x: 0, y: 0, z: 0},
+ origin: {x: 0, y: 0, z: 9},
}, {width: 16, height: 16, depth: 1});
Likewise when using Dawn’s API, changes are needed:
wgpu::BufferCopyView srcView;
srcView.buffer = buffer;
srcView.layout.bytesPerRow = 256;
wgpu::TextureCopyView dstView;
dstView.texture = texture;
-dstView.arrayLayer = 9;
-dstView.origin = {0, 0, 0};
+dstView.origin = {0, 0, 9};
wgpu::Extent3D copySize = {16, 16, 1};
encoder.CopyBufferToTexture(&srcView, &dstView, &copySize);
wgpu::BufferCopyView containing a wgpu::TextureDataLayout
WebGPU PR
To reuse the layout concepts of GPUBufferCopyView between writeTexture and copies between buffers and textures, all the members of wgpu::BufferCopyView that deal with layout (i.e. all but buffer) have been moved to a layout member of type wgpu::TextureDataLayout. No changes are needed in JavaScript, but when using Dawn the following changes are needed:
wgpu::BufferCopyView srcView;
srcView.buffer = buffer;
-srcView.offset = 0;
-srcView.bytesPerRow = 256;
-srcView.rowsPerImage = 16;
+srcView.layout.offset = 0;
+srcView.layout.bytesPerRow = 256;
+srcView.layout.rowsPerImage = 16;
GPUBindGroupLayoutEntry.minBufferBindingSize
WebGPU now requires validating at each draw / dispatch call that the uniform and storage buffer bindings are big enough for what's declared in the pipeline. This can add non-trivial overhead to each draw / dispatch, so GPUBindGroupLayoutEntry gained a new minBufferBindingSize member.
This member guarantees that each bind group using that layout will contain buffer bindings of at least that many bytes, and guarantees that pipelines using this layout won't use more than that many bytes. It allows skipping the size check for that buffer binding, so using it is strongly encouraged. GPURenderPipeline/GPUComputePipeline.getBindGroupLayout now returns GPUBindGroupLayouts with minBufferBindingSize set automatically, so these bind group layouts are efficient to use.
Here's an example using minBufferBindingSize:
const bgLayout = device.createBindGroupLayout({
  entries: [{
    binding: 0,
    visibility: GPUShaderStage.COMPUTE,
    type: "uniform-buffer",
    minBufferBindingSize: 16,
  }],
});

const bindgroup = device.createBindGroup({
  layout: bgLayout,
  entries: [{
    binding: 0,
    // Below, size (or the default value computed for size) must
    // be at least 16. Otherwise a validation error occurs.
    resource: {buffer, offset: 64, size: 16},
  }],
});

const pipeline = device.createComputePipeline({
  // The "main" entry point of the module must not use more than
  // 16 bytes of the uniform at group=0 binding=0. Otherwise a
  // validation error occurs.
  layout: device.createPipelineLayout({bindGroupLayouts: [bgLayout]}),
  computeStage: {module, entryPoint: "main"},
});
Previously Chromium / Dawn enforced WebGPU's resource usage validation at whole-resource granularity.
This meant that if one mip level of a texture was used as an attachment for a render pass, the whole texture was unavailable for sampling inside that render pass.
This limitation has been lifted! It is now possible to generate mipmaps of a texture with render passes that sample from larger mip levels. See this (old-ish) example of a mipmap generation function.
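As a minimal sketch of that idea, the per-level views could be set up like this (`mipLevelViewPairs` is a hypothetical helper; the render pipeline, bind groups, and render passes are omitted):

```javascript
// Sketch: per-mip-level texture views for mipmap generation.
// Each pass samples `src` (level i - 1) while rendering to `dst` (level i),
// which is valid now that usage is validated per sub-resource rather than
// per whole texture.
function mipLevelViewPairs(texture, mipLevelCount) {
  const pairs = [];
  for (let i = 1; i < mipLevelCount; i++) {
    pairs.push({
      src: texture.createView({ baseMipLevel: i - 1, mipLevelCount: 1 }),
      dst: texture.createView({ baseMipLevel: i, mipLevelCount: 1 }),
    });
  }
  return pairs;
}
```

Each `dst` view would be used as a render pass color attachment while the corresponding `src` view is bound for sampling in that same pass.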
The depth component of depth-stencil textures can now be sampled in shaders as if they were r32float textures. In addition, it is possible to use comparison sampling (useful for shadow-mapping techniques like PCF) by setting GPUSamplerDescriptor.compare and using the sampler as a comparison-sampler binding.
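As a sketch, the descriptors involved could look like this (the "less" compare function and the binding numbers are illustrative choices, not prescribed by the API; the numeric fallback for GPUShaderStage.FRAGMENT only makes the snippet self-contained):

```javascript
// Sketch: descriptors for sampling a depth texture with a comparison
// sampler, e.g. for percentage-closer filtering (PCF) shadows.
const FRAGMENT = (globalThis.GPUShaderStage && GPUShaderStage.FRAGMENT) || 2;

// Setting `compare` makes this a comparison sampler: sampling compares a
// reference depth against the sampled depth using this function.
const shadowSamplerDescriptor = { compare: "less" };

const shadowBindGroupLayoutEntries = [
  // The depth texture, sampled as if it were an r32float texture.
  { binding: 0, visibility: FRAGMENT, type: "sampled-texture" },
  // The comparison sampler uses its own binding type.
  { binding: 1, visibility: FRAGMENT, type: "comparison-sampler" },
];

// In a browser: device.createSampler(shadowSamplerDescriptor) and
// device.createBindGroupLayout({ entries: shadowBindGroupLayoutEntries }).
```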
Previously Chromium / Dawn enforced a limit of at most 16 bindings in GPUBindGroupLayoutDescriptor.entries across all stages and all binding types. This was overly restrictive, and more complex applications would run into it regularly. Dawn now implements the same validation as the WebGPU specification, with per-stage and per-binding-type limits.
GPURenderPipelineDescriptor sampleMask and alphaToCoverageEnabled
These two members of the render pipeline descriptor allow controlling more precisely what happens when rendering to a multisampled texture. Please see the alphaToCoverageEnabled section of the spec and the sampleMask section of the spec.
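As a sketch, these members might be set like this (the mask value and sample count are illustrative; `multisampleState` is just a plain object holding the multisampling-related members of the render pipeline descriptor):

```javascript
// Sketch: multisampling-related members of a render pipeline descriptor.
const multisampleState = {
  sampleCount: 4,
  // Only samples whose bit is set in sampleMask can be written;
  // 0b0101 keeps samples 0 and 2 and discards samples 1 and 3.
  sampleMask: 0b0101,
  // When true, the fragment's alpha additionally masks its coverage,
  // fading coverage out as alpha approaches 0.
  alphaToCoverageEnabled: true,
};
// These would be passed to device.createRenderPipeline({...}) alongside
// the usual vertex and fragment state.
```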