GPU buffers are effectively contiguous blocks of memory that have been "uploaded" to the GPU, where they can be accessed efficiently.
In WebGL1, buffers can be:

- allocated and initialized with data copied from CPU memory
- updated by uploading new data from the CPU

Buffers were significantly improved in WebGL2. In WebGL2 it is also possible to:

- copy data directly between buffers, without going through the CPU
- read buffer contents back into CPU memory
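The two headline WebGL2 additions, GPU-to-GPU copies and CPU readback, can be sketched with the raw WebGL2 API (which luma.gl manages under the hood). This is a minimal sketch, assuming `gl` is a `WebGL2RenderingContext`:

```javascript
// Sketch only: `gl` is assumed to be a WebGL2RenderingContext.

// Copy `byteLength` bytes between two buffers entirely on the GPU (WebGL2).
function copyBetweenBuffers(gl, src, dst, byteLength) {
  gl.bindBuffer(gl.COPY_READ_BUFFER, src);
  gl.bindBuffer(gl.COPY_WRITE_BUFFER, dst);
  gl.copyBufferSubData(gl.COPY_READ_BUFFER, gl.COPY_WRITE_BUFFER, 0, 0, byteLength);
}

// Read buffer contents back into CPU memory (WebGL2).
function readBufferToCPU(gl, buffer, numFloats) {
  const result = new Float32Array(numFloats);
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.getBufferSubData(gl.ARRAY_BUFFER, 0, result);
  return result;
}
```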
In WebGL1, buffers are mainly used to:

- store vertex attribute data (`GL.ARRAY_BUFFER`)
- store element indices (`GL.ELEMENT_ARRAY_BUFFER`)

In WebGL2, buffers can also be used to:

- store uniform blocks (`GL.UNIFORM_BUFFER`)
- capture transform feedback output (`GL.TRANSFORM_FEEDBACK_BUFFER`)
- serve as the source or destination of pixel transfers (`GL.PIXEL_PACK_BUFFER` / `GL.PIXEL_UNPACK_BUFFER`)
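The main WebGL1 use case, uploading vertex attribute data, looks like this with the raw WebGL API (calls that luma.gl manages under the hood). A minimal sketch, assuming `gl` is a `WebGLRenderingContext`:

```javascript
// Sketch only: `gl` is assumed to be a WebGLRenderingContext.
// The classic WebGL1 use case: upload vertex attribute data.
function createVertexBuffer(gl, data) {
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  // STATIC_DRAW hints that the data is written once and drawn many times.
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
  return buffer;
}

// Three 2D vertices of a triangle, as the typed array WebGL expects.
const positions = new Float32Array([0, 0, 1, 0, 0, 1]);
```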
WebGL defines a number of binding points ("targets") for buffers. These are all managed under the hood by luma.gl. A WebGL buffer can be used repeatedly to represent different types of data (i.e. bound to different WebGL binding points), with one exception.
Any buffer that has been used to describe indices (`target: GL.ELEMENT_ARRAY_BUFFER`) cannot be used in any other context.
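Because of this restriction, index data gets its own dedicated buffer. A sketch with the raw WebGL API, assuming `gl` is a `WebGLRenderingContext`:

```javascript
// Sketch only: once bound to ELEMENT_ARRAY_BUFFER, this buffer should
// never be bound to any other target.
function createIndexBuffer(gl, indices) {
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
  return buffer;
}

// Indices are small unsigned integers, typically a Uint16Array in WebGL1.
const indices = new Uint16Array([0, 1, 2, 2, 1, 3]);
```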
Buffers have a `usage` parameter that is a hint describing how they will be updated. The default value is `GL.STATIC_DRAW`.
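The usage hint is passed through to `gl.bufferData` when the buffer is initialized. A sketch of the three WebGL1 hint values and how they reach the GL; `gl` is assumed to be a `WebGLRenderingContext`:

```javascript
// Sketch only: `gl` is assumed to be a WebGLRenderingContext.
// The three WebGL1 usage hints (WebGL2 adds *_READ and *_COPY variants):
//   STATIC_DRAW  - written once, drawn many times (the common default)
//   DYNAMIC_DRAW - updated repeatedly, drawn many times
//   STREAM_DRAW  - written roughly once per use
function createBufferWithUsage(gl, data, usage) {
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, data, usage);
  return buffer;
}
```

For example, `createBufferWithUsage(gl, data, gl.DYNAMIC_DRAW)` would be a reasonable choice for data that is rewritten every frame.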
The cost of transferring memory between the CPU and the GPU depends on many factors: for example, whether your GPU uses a unified memory architecture, and the memory bandwidth of your system.
Some good rules of thumb:

- Upload data once and reuse it across frames whenever possible.
- Avoid transferring data between CPU and GPU in performance-critical paths; batch many small updates into fewer, larger ones.
- Reading data back from the GPU can stall the rendering pipeline and is typically far more expensive than uploading.
When updating buffers, setting the `usage` parameter appropriately on creation can have a performance impact, since it lets the driver choose a memory placement suited to the expected update pattern.
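One common update pattern is to allocate the buffer once with a `DYNAMIC_DRAW` hint and then overwrite its contents in place, rather than reallocating it every frame. A minimal sketch with the raw WebGL API, assuming `gl` is a `WebGLRenderingContext`:

```javascript
// Sketch only: `gl` is assumed to be a WebGLRenderingContext.
// Allocate a fixed-size buffer once, hinting that it will be updated often.
function createDynamicBuffer(gl, byteLength) {
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, byteLength, gl.DYNAMIC_DRAW);
  return buffer;
}

// Overwrite the buffer contents in place; no reallocation happens here.
function updateBuffer(gl, buffer, data) {
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferSubData(gl.ARRAY_BUFFER, 0, data);
}
```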