
Bindings

A key responsibility of any GPU API is to enable the application to set up data so that it can be accessed by shaders. In luma.gl, this is done by describing the expected data with layouts and then binding the actual values (buffers, textures, samplers) to pipelines and models.

The terminology can be a little confusing. To make it easy to cross-reference other code and documentation, luma.gl attempts to roughly follow WebGPU / WGSL conventions. The following terms are used:

  • layouts - metadata describing a shader's connection points (attributes, uniform blocks, textures, samplers)
  • attribute layout - metadata describing the shader's attributes
  • attribute buffers - the actual GPU buffers supplying values for attributes
  • binding layout - metadata describing the shader's bindings
  • bindings - the actual resources (uniform buffers, textures, samplers) bound to the shader

ShaderLayout

Shader code (whether in WGSL or GLSL) contains declarations of attributes, uniform blocks, samplers etc. Collectively, these define the data that needs to be bound before the shader can execute on the GPU. And since the bindings are performed on the CPU, a certain amount of metadata is needed in JavaScript to describe what data a specific shader or pair of shaders expects.

luma.gl defines the ShaderLayout type to collect a description of a (pair of) shaders. A ShaderLayout is required when creating a RenderPipeline or ComputePipeline.

Shaders expose numeric binding locations; in applications, however, named bindings tend to be more convenient. A ShaderLayout provides the mapping between the names and the numeric locations.

Note: ShaderLayouts can be written manually (by reading the shader code), or be generated automatically, e.g. by parsing the shader source code or by using the WebGL program introspection APIs.

const shaderLayout: ShaderLayout = {
  attributes: [
    {name: 'instancePositions', location: 0, format: 'float32x2', stepMode: 'instance'},
    {name: 'instanceVelocities', location: 1, format: 'float32x2', stepMode: 'instance'},
    {name: 'vertexPositions', location: 2, format: 'float32x2', stepMode: 'vertex'}
  ],

  bindings: [
    {name: 'projectionUniforms', location: 0, type: 'uniforms'},
    {name: 'textureSampler', location: 1, type: 'sampler'},
    {name: 'texture', location: 2, type: 'texture'}
  ]
};
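
The attributes and bindings arguments passed to createRenderPipeline below map the names declared in the ShaderLayout to actual GPU resources. A sketch for illustration only (the buffer, texture and sampler variables are hypothetical and assumed to have been created elsewhere):

const attributes = {
  instancePositions: instancePositionBuffer,   // per-instance 'float32x2' data
  instanceVelocities: instanceVelocityBuffer,  // per-instance 'float32x2' data
  vertexPositions: vertexPositionBuffer        // per-vertex 'float32x2' data
};

const bindings = {
  projectionUniforms: projectionUniformBuffer, // uniform buffer at binding 0
  textureSampler: sampler,                     // sampler at binding 1
  texture: texture                             // texture at binding 2
};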

device.createRenderPipeline({
  shaderLayout,
  attributes,
  bindings
});

Attributes
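The attributes field of a ShaderLayout declares each attribute's name, its shader location, its vertex format, and whether it steps per vertex or per instance: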

const shaderLayout: ShaderLayout = {
  attributes: [
    {name: 'instancePositions', location: 0, format: 'float32x2', stepMode: 'instance'},
    {name: 'instanceVelocities', location: 1, format: 'float32x2', stepMode: 'instance'},
    {name: 'vertexPositions', location: 2, format: 'float32x2', stepMode: 'vertex'}
  ],
  ...
};

Buffer Maps

Buffer mapping is an optional mechanism that enables more sophisticated GPU attribute buffer layouts.

Buffer mappings offer control of GPU buffer vertex formats, as well as offsets, strides, interleaving etc.

Note: Pipeline attribute layouts are immutable and need to be defined when a pipeline is created. All buffers subsequently supplied to that pipeline need to conform to any buffer mapping properties specified during pipeline creation (e.g. the vertex format may be locked to unorm8x4).

The bufferLayout field in the example below specifies that the instanceColors attribute is stored as normalized 8-bit values (unorm8x4), and that instancePositions and instanceVelocities are interleaved in a single buffer:

const bufferLayout: BufferLayout[] = [
  {name: 'instanceColors', format: 'unorm8x4'},
  {name: 'instanceVelocities', format: 'interleaved', attributes: [
    {name: 'instancePositions'},
    {name: 'instanceVelocities'}
  ]},
  ...
];
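
To make the memory layout concrete, here is a sketch of what data matching this bufferLayout could look like (the values are illustrative only):

// instancePositions and instanceVelocities interleaved in one buffer:
// each instance contributes [px, py, vx, vy] (4 x float32 = 16 bytes per instance)
const interleavedData = new Float32Array([
  0.0, 0.0,   0.1, -0.2,  // instance 0: position, velocity
  1.0, 0.5,  -0.3,  0.4   // instance 1: position, velocity
]);

// instanceColors as 'unorm8x4': 4 bytes per instance, normalized to 0..1 in the shader
const colorData = new Uint8Array([
  255, 0, 0, 255,  // instance 0: red
  0, 255, 0, 255   // instance 1: green
]);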

device.createRenderPipeline({
  shaderLayout,
  // We want to use "non-standard" buffers: a custom vertex format,
  // and two attributes interleaved in the same buffer
  bufferLayout: [
    {name: 'instanceColors', format: 'unorm8x4'},
    {name: 'instanceVelocities', format: 'interleaved', attributes: [
      {name: 'instancePositions'},
      {name: 'instanceVelocities'}
    ]}
  ],
  attributes: {},
  bindings: {}
});

Model usage
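The same attribute layout information can also be supplied when creating a higher-level Model: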

const model = new Model(device, {
  attributeLayout: {
    instancePositions: {location: 0, format: 'float32x2', stepMode: 'instance'},
    instanceVelocities: {location: 1, format: 'float32x2', stepMode: 'instance'},
    vertexPositions: {location: 2, format: 'float32x2', stepMode: 'vertex'}
  }
});
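
Once the model is created, the actual attribute buffers can be supplied by name before drawing. A minimal sketch, assuming the Model exposes setAttributes() and draw() methods (the buffer and render pass variables are hypothetical):

model.setAttributes({
  instancePositions: instancePositionBuffer,
  instanceVelocities: instanceVelocityBuffer,
  vertexPositions: vertexPositionBuffer
});
model.draw(renderPass);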

WGSL vertex shader

struct Uniforms {
  modelViewProjectionMatrix : mat4x4<f32>
}

@group(0) @binding(0) var<uniform> uniforms : Uniforms; // BINDING 0

struct VertexOutput {
  @builtin(position) Position : vec4<f32>,
  @location(0) fragUV : vec2<f32>,
  @location(1) fragPosition : vec4<f32>
}

@vertex
fn main(@location(0) position : vec4<f32>,
        @location(1) uv : vec2<f32>) -> VertexOutput {
  var output : VertexOutput;
  output.Position = uniforms.modelViewProjectionMatrix * position;
  output.fragUV = uv;
  output.fragPosition = 0.5 * (position + vec4<f32>(1.0, 1.0, 1.0, 1.0));
  return output;
}

WGSL fragment shader

@group(0) @binding(1) var mySampler : sampler;         // BINDING 1
@group(0) @binding(2) var myTexture : texture_2d<f32>; // BINDING 2

@fragment
fn main(@location(0) fragUV : vec2<f32>,
        @location(1) fragPosition : vec4<f32>) -> @location(0) vec4<f32> {
  return textureSample(myTexture, mySampler, fragUV) * fragPosition;
}
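
For reference, a ShaderLayout describing this shader pair would declare the attributes and bindings exactly as they appear in the WGSL above. A sketch (the names are illustrative; the locations, binding slots, and formats follow from the shader declarations):

const shaderLayout: ShaderLayout = {
  attributes: [
    {name: 'position', location: 0, format: 'float32x4', stepMode: 'vertex'},
    {name: 'uv', location: 1, format: 'float32x2', stepMode: 'vertex'}
  ],
  bindings: [
    {name: 'uniforms', location: 0, type: 'uniforms'},  // var<uniform> at binding 0
    {name: 'mySampler', location: 1, type: 'sampler'},  // sampler at binding 1
    {name: 'myTexture', location: 2, type: 'texture'}   // texture_2d<f32> at binding 2
  ]
};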