Platform/GFX/WebGPU

WebGPU is the new API for compute and graphics on the Web. It is developed by the GPU for the Web Community Group at the W3C.

Architecture

Firefox's WebGPU implementation has the following layers:

  • Content-visible WebIDL bindings, generated in the usual way from dom/webidl/WebGPU.webidl.
  • C++ implementations of those bindings in dom/webgpu, which are mostly concerned with marshaling content requests to be sent to the GPU process (see the sketch after this list).
  • The PWebGPU IPDL protocol, which carries those requests.
  • Rust code in gfx/wgpu_bindings to adapt our PWebGPU handlers to call wgpu_core methods.
  • The wgpu GitHub project, an independent open source project implementing the core of the WebGPU API in Rust.
  • The Naga GitHub project, which translates WGSL, WebGPU's shading language, into platform shader formats such as SPIR-V, HLSL, and Metal Shading Language. It is used internally by wgpu.
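
As a rough illustration of how these layers connect, here is a minimal, self-contained sketch of a content-side createBuffer call. The class and message names are simplified and hypothetical, not the actual Gecko or IPDL signatures: the DOM-side code allocates an ID for the new object, forwards the descriptor over PWebGPU, and the GPU-process handler hands it through gfx/wgpu_bindings to wgpu_core.

  // Simplified sketch only: the real code lives in dom/webgpu (content side),
  // PWebGPU.ipdl, and gfx/wgpu_bindings (GPU process side).
  #include <cstdint>
  #include <cstdio>

  using RawId = uint64_t;  // wgpu objects are referred to by IDs across IPC

  struct BufferDescriptor {
    uint64_t size;
    uint32_t usage;
  };

  struct WebGPUChildSketch {
    RawId mNextId = 1;

    // Content process: the WebIDL binding implementation allocates the ID and
    // marshals the request; it does not talk to the GPU driver itself.
    RawId DeviceCreateBuffer(RawId aDeviceId, const BufferDescriptor& aDesc) {
      RawId bufferId = mNextId++;
      SendDeviceCreateBuffer(aDeviceId, aDesc, bufferId);  // PWebGPU message (mocked)
      return bufferId;
    }

    // GPU process: the PWebGPU handler would call a wgpu_bindings entry point,
    // which in turn calls the corresponding wgpu_core method (mocked as printf).
    void SendDeviceCreateBuffer(RawId aDeviceId, const BufferDescriptor& aDesc,
                                RawId aNewId) {
      std::printf("wgpu_core: device %llu creates buffer %llu (%llu bytes)\n",
                  (unsigned long long)aDeviceId, (unsigned long long)aNewId,
                  (unsigned long long)aDesc.size);
    }
  };

  int main() {
    WebGPUChildSketch child;
    child.DeviceCreateBuffer(/*deviceId*/ 7, BufferDescriptor{256, /*usage*/ 0x8});
  }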

Presentation

Content side

CanvasContext creates a new wr::ExternalImageId. It then sends a DeviceCreateSwapChain message to the GPU process (via Device::InitSwapChain) to associate this external image with the swapchain ID. It also provides the size and format of the swapchain, and generates the IDs of the buffers used for reading back the data.
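
A hedged sketch of that setup step (names and signatures are simplified assumptions, not the real CanvasContext code): allocate an external image ID, pre-generate IDs for the readback buffers, and send everything to the GPU process in one DeviceCreateSwapChain message.

  #include <cstdint>
  #include <cstdio>
  #include <vector>

  using ExternalImageId = uint64_t;  // stand-in for wr::ExternalImageId
  using RawId = uint64_t;

  struct SwapChainDesc {
    uint32_t width;
    uint32_t height;
    uint32_t format;  // e.g. an enum value for BGRA8Unorm
  };

  struct CanvasContextSketch {
    ExternalImageId mExternalImageId = 0;
    std::vector<RawId> mReadbackBufferIds;

    void InitSwapChain(RawId aDeviceId, const SwapChainDesc& aDesc) {
      static ExternalImageId sNextExternalId = 1;
      static RawId sNextBufferId = 100;

      mExternalImageId = sNextExternalId++;  // new external image ID
      // Pre-generate IDs for a small pool of readback buffers; the pool size
      // here is arbitrary.
      for (int i = 0; i < 2; ++i) {
        mReadbackBufferIds.push_back(sNextBufferId++);
      }
      // One IPDL message associates the external image with the swapchain and
      // carries the size/format plus the readback buffer IDs (mocked below).
      std::printf("PWebGPU: DeviceCreateSwapChain device=%llu external=%llu "
                  "%ux%u fmt=%u buffers=%zu\n",
                  (unsigned long long)aDeviceId,
                  (unsigned long long)mExternalImageId, aDesc.width,
                  aDesc.height, aDesc.format, mReadbackBufferIds.size());
    }
  };

  int main() {
    CanvasContextSketch ctx;
    ctx.InitSwapChain(/*deviceId*/ 7, SwapChainDesc{640, 480, /*BGRA8*/ 1});
  }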

A Texture object is created with matching dimensions; it is shared with the other pieces of the machinery via WebRenderLocalCanvasData.
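
The fields below are an assumption about roughly what that shared data needs to carry; the real WebRenderLocalCanvasData may differ.

  #include <cstdint>

  // Hypothetical shape of the data shared between the canvas context, the
  // display item code, and the WebRender user data (field names are guesses).
  struct LocalCanvasDataSketch {
    uint64_t mExternalImageId = 0;  // wr::ExternalImageId for this canvas
    uint64_t mGpuTextureId = 0;     // ID of the wgpu texture backing the canvas
    uint32_t mWidth = 0;
    uint32_t mHeight = 0;
    uint32_t mFormat = 0;           // swapchain format
    bool mDirty = false;            // needs (re)association with an image key
  };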

nsDisplayCanvas::CreateWebRenderCommands, which runs during display list building, checks whether the image key exists and matches the display list. Otherwise, it creates a new key and associates it with the external image via AddPrivateExternalImage. It then pushes the image into the display list.
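
A simplified model of that decision (illustrative names only, not the real layout code): reuse the existing image key when it is still valid, otherwise mint a new one, bind it to the external image, and push it into the display list.

  #include <cstdint>
  #include <cstdio>
  #include <optional>

  using ImageKey = uint64_t;
  using ExternalImageId = uint64_t;

  struct DisplayListBuilderSketch {
    void AddPrivateExternalImage(ExternalImageId aExtId, ImageKey aKey) {
      std::printf("associate external image %llu with key %llu\n",
                  (unsigned long long)aExtId, (unsigned long long)aKey);
    }
    void PushImage(ImageKey aKey) {
      std::printf("push image key %llu into the display list\n",
                  (unsigned long long)aKey);
    }
    ImageKey GenerateImageKey() {
      static ImageKey sNext = 1;
      return sNext++;
    }
  };

  // Runs during display list building, like nsDisplayCanvas::CreateWebRenderCommands.
  void CreateWebRenderCommandsSketch(DisplayListBuilderSketch& aBuilder,
                                     std::optional<ImageKey>& aCachedKey,
                                     bool aKeyStillValid,
                                     ExternalImageId aExtId) {
    if (!aCachedKey || !aKeyStillValid) {
      // No usable key: mint a new one and tie it to the external image.
      aCachedKey = aBuilder.GenerateImageKey();
      aBuilder.AddPrivateExternalImage(aExtId, *aCachedKey);
    }
    aBuilder.PushImage(*aCachedKey);
  }

  int main() {
    DisplayListBuilderSketch builder;
    std::optional<ImageKey> key;
    CreateWebRenderCommandsSketch(builder, key, /*valid*/ false, /*extId*/ 42);
    CreateWebRenderCommandsSketch(builder, key, /*valid*/ true, /*extId*/ 42);  // reuses the key
  }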

The information about the swapchain is put into WebRenderLocalCanvasData, which can be accessed by CreateOrRecycleWebRenderUserData. Part of that logic is done in UpdateWebRenderLocalCanvasData, and another part is at the end of the CanvasContextType::WebGPU case. Finally, we send the SwapChainPresent message to the GPU process.
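
A hedged sketch of the present step, reusing the hypothetical LocalCanvasDataSketch fields from above (this is not the exact split between CreateOrRecycleWebRenderUserData and UpdateWebRenderLocalCanvasData): refresh the shared data, then tell the GPU process to present.

  #include <cstdint>
  #include <cstdio>

  struct LocalCanvasDataSketch {  // see the earlier sketch; fields are assumptions
    uint64_t mExternalImageId = 0;
    uint64_t mGpuTextureId = 0;
    uint32_t mWidth = 0, mHeight = 0;
  };

  // Mocked PWebGPU message: ask the GPU process to read the texture back and
  // publish it under the external image ID.
  void SendSwapChainPresent(uint64_t aTextureId, uint64_t aExternalImageId) {
    std::printf("PWebGPU: SwapChainPresent texture=%llu external=%llu\n",
                (unsigned long long)aTextureId,
                (unsigned long long)aExternalImageId);
  }

  // Roughly what happens when the WebRender user data is created or recycled
  // for a WebGPU canvas: the swapchain info is refreshed, then we present.
  void PresentSketch(LocalCanvasDataSketch& aData, uint64_t aCurrentTextureId,
                     uint32_t aWidth, uint32_t aHeight) {
    aData.mGpuTextureId = aCurrentTextureId;
    aData.mWidth = aWidth;
    aData.mHeight = aHeight;
    SendSwapChainPresent(aData.mGpuTextureId, aData.mExternalImageId);
  }

  int main() {
    LocalCanvasDataSketch data;
    data.mExternalImageId = 42;
    PresentSketch(data, /*textureId*/ 9, 640, 480);
  }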

GPU side

The GPU process keeps mCanvasMap, a map from external image ID to the associated PresentationData, which contains the device, the queue, and the pool of buffers used to read back the data.

On RecvDeviceCreateSwapChain, a new PresentationData is created and associated with the external ID. We also create a MemoryTextureHost object associated with the same external ID.
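
A minimal model of that bookkeeping (PresentationData's real contents and the handler's signature are simplified assumptions): the handler builds the per-canvas state, registers it in mCanvasMap under the external ID, and sets up a MemoryTextureHost-like CPU-side store of the right size.

  #include <cstdint>
  #include <cstdio>
  #include <memory>
  #include <unordered_map>
  #include <vector>

  using ExternalImageId = uint64_t;
  using RawId = uint64_t;

  // CPU-side storage standing in for MemoryTextureHost.
  struct MemoryTextureHostSketch {
    std::vector<uint8_t> mData;
    explicit MemoryTextureHostSketch(size_t aBytes) : mData(aBytes, 0) {}
  };

  // Roughly what PresentationData needs: the device/queue to submit readback
  // work, the swapchain geometry, and a pool of readback buffers.
  struct PresentationDataSketch {
    RawId mDeviceId = 0;
    RawId mQueueId = 0;
    uint32_t mWidth = 0, mHeight = 0;
    std::vector<RawId> mAvailableBuffers;  // ready for the next readback
    std::vector<RawId> mBuffersInUse;      // currently mapped or in flight
    std::shared_ptr<MemoryTextureHostSketch> mTextureHost;
  };

  struct WebGPUParentSketch {
    std::unordered_map<ExternalImageId, PresentationDataSketch> mCanvasMap;

    void RecvDeviceCreateSwapChain(RawId aDeviceId, RawId aQueueId,
                                   uint32_t aWidth, uint32_t aHeight,
                                   ExternalImageId aExternalId,
                                   std::vector<RawId> aBufferIds) {
      PresentationDataSketch data;
      data.mDeviceId = aDeviceId;
      data.mQueueId = aQueueId;
      data.mWidth = aWidth;
      data.mHeight = aHeight;
      data.mAvailableBuffers = std::move(aBufferIds);
      // 4 bytes per pixel for a BGRA8 swapchain; format handling is elided.
      data.mTextureHost = std::make_shared<MemoryTextureHostSketch>(
          size_t(aWidth) * aHeight * 4);
      mCanvasMap[aExternalId] = std::move(data);
      std::printf("swapchain registered for external image %llu\n",
                  (unsigned long long)aExternalId);
    }
  };

  int main() {
    WebGPUParentSketch parent;
    parent.RecvDeviceCreateSwapChain(7, 8, 640, 480, /*external*/ 42, {100, 101});
  }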

On RecvSwapChainPresent, we find an available buffer to read the data into, or create one. We then record a new command buffer that reads the data back into this buffer, and submit it right away. Finally, we request the buffer to be mapped, specifying PresentCallback.
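
The readback step, sketched with mocked wgpu calls (the real handler drives wgpu_core through gfx/wgpu_bindings; the helper names below are placeholders): pick or create a readback buffer, encode a texture-to-buffer copy, submit it, then request an asynchronous map with the present callback.

  #include <cstdint>
  #include <cstdio>
  #include <functional>
  #include <vector>

  using RawId = uint64_t;

  struct SwapChainStateSketch {
    std::vector<RawId> mAvailableBuffers;
    RawId mNextBufferId = 200;
  };

  // All wgpu interactions are mocked; in Firefox they are calls into
  // gfx/wgpu_bindings which drive wgpu_core.
  void EncodeTextureToBufferCopy(RawId aTextureId, RawId aBufferId) {
    std::printf("encode copy: texture %llu -> buffer %llu\n",
                (unsigned long long)aTextureId, (unsigned long long)aBufferId);
  }
  void SubmitEncodedCommands() { std::printf("submit command buffer\n"); }
  void MapBufferAsync(RawId aBufferId, std::function<void(RawId)> aCallback) {
    std::printf("map buffer %llu for reading\n", (unsigned long long)aBufferId);
    aCallback(aBufferId);  // in reality this fires later, once mapping completes
  }

  void RecvSwapChainPresentSketch(SwapChainStateSketch& aState, RawId aTextureId) {
    // Reuse an idle readback buffer, or make a new one if none is available.
    RawId bufferId;
    if (!aState.mAvailableBuffers.empty()) {
      bufferId = aState.mAvailableBuffers.back();
      aState.mAvailableBuffers.pop_back();
    } else {
      bufferId = aState.mNextBufferId++;
    }
    // Record and submit a command buffer that copies the canvas texture into
    // the readback buffer, then ask for the buffer to be mapped.
    EncodeTextureToBufferCopy(aTextureId, bufferId);
    SubmitEncodedCommands();
    MapBufferAsync(bufferId, [](RawId aMapped) {
      std::printf("PresentCallback for buffer %llu\n", (unsigned long long)aMapped);
    });
  }

  int main() {
    SwapChainStateSketch state;
    RecvSwapChainPresentSketch(state, /*textureId*/ 9);
  }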

The callback copies the data from the mapped buffer into the MemoryTextureHost storage. When WebRender builds a frame, it gets a TextureUpdateSource::External update, which is resolved by locking the external texture handler in upload_to_texture_cache and getting the ExternalImageSource::RawData for the data we read back from wgpu.
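
A sketch of the copy done by the callback, under the assumption of a BGRA8 swapchain: because wgpu requires the bytes-per-row of a texture-to-buffer copy to be 256-byte aligned, the mapped buffer's padded rows are copied one by one into the tightly packed MemoryTextureHost storage.

  #include <cstdint>
  #include <cstring>
  #include <vector>

  // Copy the mapped readback buffer into tightly packed CPU-side storage
  // (standing in for MemoryTextureHost). The source rows are padded because
  // wgpu aligns the bytes-per-row of buffer copies to 256 bytes.
  void PresentCallbackSketch(const uint8_t* aMappedData, size_t aSrcBytesPerRow,
                             uint32_t aWidth, uint32_t aHeight,
                             std::vector<uint8_t>& aTextureHostStorage) {
    const size_t dstBytesPerRow = size_t(aWidth) * 4;  // BGRA8: 4 bytes per pixel
    aTextureHostStorage.resize(dstBytesPerRow * aHeight);
    for (uint32_t row = 0; row < aHeight; ++row) {
      std::memcpy(aTextureHostStorage.data() + row * dstBytesPerRow,
                  aMappedData + row * aSrcBytesPerRow, dstBytesPerRow);
    }
    // After this, WebRender can lock the external image and read the raw data
    // during upload_to_texture_cache.
  }

  int main() {
    const uint32_t width = 2, height = 2;
    const size_t srcBytesPerRow = 256;  // padded row stride from the readback copy
    std::vector<uint8_t> mapped(srcBytesPerRow * height, 0xff);
    std::vector<uint8_t> hostStorage;
    PresentCallbackSketch(mapped.data(), srcBytesPerRow, width, height, hostStorage);
  }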

Links

Bug tracking

All relevant bugs for this project are tracked by the Graphics: WebGPU component. See all open bugs, or all bugs (including closed ones).

Current work is tracked in the following bugs:

Demos:

General information: