Add docs
Aleksander Katan committed Sep 18, 2025
commit c21b3cce6cdc123eb77d2f43a508ac338b520cd1
68 changes: 51 additions & 17 deletions apps/typegpu-docs/src/content/docs/fundamentals/utils.mdx
@@ -5,10 +5,10 @@ description: A list of various utilities provided by TypeGPU.

## *prepareDispatch*

-The `prepareDispatch` function simplifies running simple computations on the GPU.
+The `prepareDispatch` function streamlines running simple computations on the GPU.
Under the hood, it wraps the callback in a `TgpuFn`, creates a compute pipeline, and returns a `PreparedDispatch` object used to execute it.

-This can help reduce serialization overhead when initializing buffers with data.
+For example, `prepareDispatch` can simplify writing a pipeline that reduces serialization overhead when initializing buffers with data.

```ts twoslash
import tgpu, { prepareDispatch } from 'typegpu';
@@ -24,20 +24,21 @@ const Boid = d.struct({
// buffer of 2048 Boids
const boidsMutable = root.createMutable(d.arrayOf(Boid, 2048));

-const dispatch = prepareDispatch(root, (x) => {
+const initialize = prepareDispatch(root, (x) => {
  'kernel';
  const boidData = Boid({ index: x, pos: d.vec3f() });
  boidsMutable.$[x] = boidData;
});
-// run callback for each x in range 0..2047
-dispatch(2048);
+
+// run the callback for each x in range 0..2047
+initialize.dispatch(2048);
```

:::note
Remember to mark the callback with the `'kernel'` directive to let TypeGPU know that this function is TGSL.
:::

-The returned dispatch function can be called multiple times.
+The returned `PreparedDispatch` object can be used for multiple dispatches.
Since the pipeline is reused, there’s no additional overhead for subsequent calls.

```ts twoslash
@@ -52,12 +53,11 @@ const doubleUp = prepareDispatch(root, (x) => {
  data.$[x] *= 2;
});

-doubleUp(8);
-doubleUp(8);
-doubleUp(4);
+doubleUp.dispatch(8);
+doubleUp.dispatch(8);
+doubleUp.dispatch(4);

-// no need to call `onSubmittedWorkDone()` because the command encoder
-// will queue the read after `doubleUp` anyway
+// the command encoder will queue the read after `doubleUp`
console.log(await data.read()); // [0, 8, 16, 24, 16, 20, 24, 28]
```

@@ -82,17 +82,51 @@ prepareDispatch(root, (x, y) => {
  'kernel';
  randf.seed2(d.vec2f(x, y).div(1024));
  waterLevelMutable.$[x][y] = 10 + randf.sample();
-})(1024, 512);
+}).dispatch(1024, 512);
// callback will be called for x in range 0..1023 and y in range 0..511

// (optional) read values in JS
console.log(await waterLevelMutable.read());
```
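
The page shows one- and two-dimensional dispatches; since WebGPU dispatches are three-dimensional, a three-argument callback presumably works analogously. A hypothetical sketch, reusing `root` and `d` from the snippets above (the `voxelsMutable` buffer and the three-dimensional call itself are assumptions, not confirmed by this page):

```ts
// hypothetical: a 16x16x16 grid of floats
const voxelsMutable = root.createMutable(
  d.arrayOf(d.arrayOf(d.arrayOf(d.f32, 16), 16), 16),
);

// assuming a third dimension is supported, mirroring WebGPU's 3D dispatch
prepareDispatch(root, (x, y, z) => {
  'kernel';
  voxelsMutable.$[x][y][z] = 0;
}).dispatch(16, 16, 16);
```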

-It is highly recommended NOT to use `dispatch` for:
+Analogously to `TgpuComputePipeline`, the result of `prepareDispatch` can have bind groups bound using the `with` method.
+
+```ts twoslash
+import tgpu, { prepareDispatch } from 'typegpu';
+import * as d from 'typegpu/data';
+import * as std from 'typegpu/std';
+const root = await tgpu.init();
+// ---cut---
+const layout = tgpu.bindGroupLayout({
+  buffer: { storage: d.arrayOf(d.u32), access: 'mutable' },
+});
+const buffer1 = root
+  .createBuffer(d.arrayOf(d.u32, 3), [1, 2, 3]).$usage('storage');
+const buffer2 = root
+  .createBuffer(d.arrayOf(d.u32, 4), [2, 4, 8, 16]).$usage('storage');
+const bindGroup1 = root.createBindGroup(layout, {
+  buffer: buffer1,
+});
+const bindGroup2 = root.createBindGroup(layout, {
+  buffer: buffer2,
+});
+
+const test = prepareDispatch(root, (x) => {
+  'kernel';
+  layout.$.buffer[x] *= 2;
+});
+
+test.with(layout, bindGroup1).dispatch(3);
+test.with(layout, bindGroup2).dispatch(4);
+
+console.log(await buffer1.read()); // [2, 4, 6]
+console.log(await buffer2.read()); // [4, 8, 16, 32]
+```

+It is highly recommended NOT to use `prepareDispatch` for:

- More complex compute shaders.
-When using `dispatch`, it is impossible to switch bind groups or to change workgroup sizes.
+When using `prepareDispatch`, it is impossible to change workgroup sizes, and bind groups can only be swapped whole via `with`.
For such cases, a manually created pipeline would be more suitable (a minimal sketch follows this list).

- Small calls.
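
A minimal sketch of such a manually created pipeline, assuming TypeGPU's `~unstable` pipeline API (`computeFn`, `withCompute`), whose exact shape may differ between versions; `data` stands in for the `u32` buffer from the earlier `doubleUp` example:

```ts
// assumes tgpu, d, and root from the earlier snippets
const data = root.createMutable(d.arrayOf(d.u32, 8));

const mainCompute = tgpu['~unstable'].computeFn({
  in: { gid: d.builtin.globalInvocationId },
  workgroupSize: [64], // freely chosen, unlike with prepareDispatch
})((input) => {
  'kernel';
  // manual bounds check: 64 threads cover more than 8 elements
  if (input.gid.x < 8) {
    data.$[input.gid.x] *= 2;
  }
});

const pipeline = root['~unstable']
  .withCompute(mainCompute)
  .createPipeline();

// dispatchWorkgroups counts workgroups, not threads
pipeline.dispatchWorkgroups(1);
```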
@@ -122,14 +156,14 @@ import * as d from 'typegpu/data';
const root = await tgpu.init();
// ---cut---
const callCountMutable = root.createMutable(d.u32, 0);
-const dispatch = prepareDispatch(root, () => {
+const compute = prepareDispatch(root, () => {
  'kernel';
  callCountMutable.$ += 1;
  console.log('Call number', callCountMutable.$);
});

-dispatch();
-dispatch();
+compute.dispatch();
+compute.dispatch();

// Eventually...
// "[GPU] Call number 1"
9 changes: 9 additions & 0 deletions packages/typegpu/src/prepareDispatch.ts
@@ -41,6 +41,10 @@ class PreparedDispatch<TArgs> {
    this.#pipeline = pipeline;
  }

+  /**
+   * Returns a new PreparedDispatch with the specified bind group bound.
+   * Analogous to `TgpuComputePipeline.with()`.
+   */
  with(
    bindGroupLayout: TgpuBindGroupLayout,
    bindGroup: TgpuBindGroup,
Expand All @@ -51,6 +55,11 @@ class PreparedDispatch<TArgs> {
    );
  }

+  /**
+   * Run the prepared dispatch.
+   * Unlike `TgpuComputePipeline.dispatchWorkgroups()`,
+   * this method takes in the number of threads to run in each dimension.
+   */
  get dispatch(): DispatchForArgs<TArgs> {
    return this.#createDispatch(this.#pipeline);
  }