Merged
Update docs
Aleksander Katan committed Sep 18, 2025
commit 869d2d1925d2deceda62eb47505bfa2cbb271e21
apps/typegpu-docs/src/content/docs/fundamentals/utils.mdx (45 changes: 9 additions & 36 deletions)
@@ -6,39 +6,7 @@ description: A list of various utilities provided by TypeGPU.
## *prepareDispatch*

The `prepareDispatch` function streamlines running simple computations on the GPU.

For example, `prepareDispatch` can simplify writing a pipeline that reduces serialization overhead when initializing buffers with data.

```ts twoslash
import tgpu, { prepareDispatch } from 'typegpu';
import * as d from 'typegpu/data';

const root = await tgpu.init();

const Boid = d.struct({
  index: d.u32,
  pos: d.vec3f,
});

// buffer of 2048 Boids
const boidsMutable = root.createMutable(d.arrayOf(Boid, 2048));

const initialize = prepareDispatch(root, (x) => {
  'kernel';
  const boidData = Boid({ index: x, pos: d.vec3f() });
  boidsMutable.$[x] = boidData;
});

// run the callback for each x in range 0..2047
initialize.dispatch(2048);
```

:::note
Remember to mark the callback with the `'kernel'` directive to let TypeGPU know that this function is TGSL.
:::

The returned `PreparedDispatch` object can be used for multiple dispatches.
Under the hood, `prepareDispatch` wraps the callback in a `TgpuFn`, creates a compute pipeline, and returns an object with a `dispatch` method that executes the pipeline.
Since the pipeline is reused, there’s no additional overhead for subsequent calls.

```ts twoslash
@@ -61,7 +29,12 @@ doubleUp.dispatch(4);
console.log(await data.read()); // [0, 8, 16, 24, 16, 20, 24, 28]
```
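
The definitions used in this snippet are not shown above. The following is a minimal sketch of a setup that would produce the printed result, where the buffer's element type, its initial contents, and the extra `dispatch(8)` calls are assumptions for illustration, not the exact code of the docs example:

```ts
import tgpu, { prepareDispatch } from 'typegpu';
import * as d from 'typegpu/data';

const root = await tgpu.init();

// assumed setup: 8 u32 values initialized to 0..7
const data = root.createMutable(d.arrayOf(d.u32, 8), [0, 1, 2, 3, 4, 5, 6, 7]);

const doubleUp = prepareDispatch(root, (x) => {
  'kernel';
  data.$[x] = data.$[x] * 2;
});

// each call reuses the same compute pipeline
doubleUp.dispatch(8); // [0, 2, 4, 6, 8, 10, 12, 14]
doubleUp.dispatch(8); // [0, 4, 8, 12, 16, 20, 24, 28]
doubleUp.dispatch(4); // [0, 8, 16, 24, 16, 20, 24, 28]
```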

The callback can have up to three arguments (dimensions), as shown in the sketch below.
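
For instance, a two-argument callback can fill a flattened 2D grid. This sketch assumes the `(x, y)` parameters and the `dispatch(width, height)` call mirror the one-dimensional case shown earlier:

```ts
import tgpu, { prepareDispatch } from 'typegpu';
import * as d from 'typegpu/data';

const root = await tgpu.init();

const width = 16;
const height = 16;

// a flat buffer representing a width-by-height grid
const grid = root.createMutable(d.arrayOf(d.u32, width * height));

const fillGrid = prepareDispatch(root, (x, y) => {
  'kernel';
  // store each cell's flat index
  grid.$[y * width + x] = y * width + x;
});

// runs the callback once for every (x, y) pair in the grid
fillGrid.dispatch(width, height);
```
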
Buffer initialization commonly uses random number generators.
For that, you can use the [`@typegpu/noise`](/TypeGPU/ecosystem/typegpu-noise) library.

@@ -123,15 +96,15 @@ console.log(await buffer1.read()); // [2, 4, 6];
console.log(await buffer2.read()); // [4, 8, 16, 32];
```

It is recommended NOT to use `prepareDispatch` for:

- More complex compute shaders.
When using `prepareDispatch`, it is impossible to change workgroup sizes or to use [slots](/TypeGPU/fundamentals/slots).
For such cases, a manually created pipeline would be more suitable.

- Small calls.
Usually, for small amounts of data, creating and dispatching the shader is more costly than the serialization it avoids.
Small buffers can be more efficiently initialized with the `buffer.write()` method, as sketched below.
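
For example, a four-element buffer is cheaper to fill with a direct write than with a prepared dispatch (a sketch; the buffer shape and values are arbitrary):

```ts
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

const root = await tgpu.init();

// for a handful of elements, writing from the CPU avoids
// creating and dispatching a compute pipeline entirely
const smallBuffer = root.createBuffer(d.arrayOf(d.f32, 4));
smallBuffer.write([0.5, 1.5, 2.5, 3.5]);
```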

:::note
The default workgroup sizes are: