Merged
Changes from 7 commits
97 changes: 52 additions & 45 deletions apps/typegpu-docs/src/content/docs/fundamentals/utils.mdx
@@ -5,39 +5,8 @@ description: A list of various utilities provided by TypeGPU.

## *prepareDispatch*

The `prepareDispatch` function simplifies running simple computations on the GPU.
Under the hood, it wraps the callback in a `TgpuFn`, creates a compute pipeline, and returns a dispatch function to execute it.

This can help reduce serialization overhead when initializing buffers with data.

```ts twoslash
import tgpu, { prepareDispatch } from 'typegpu';
import * as d from 'typegpu/data';

const root = await tgpu.init();

const Boid = d.struct({
index: d.u32,
pos: d.vec3f,
});

// buffer of 2048 Boids
const boidsMutable = root.createMutable(d.arrayOf(Boid, 2048));

const dispatch = prepareDispatch(root, (x) => {
'kernel';
const boidData = Boid({ index: x, pos: d.vec3f() });
boidsMutable.$[x] = boidData;
});
// run callback for each x in range 0..2047
dispatch(2048);
```

:::note
Remember to mark the callback with `'kernel'` directive to let TypeGPU know that this function is TGSL.
:::

The returned dispatch function can be called multiple times.
The `prepareDispatch` function streamlines running simple computations on the GPU.
Under the hood, it wraps the callback in a `TgpuFn`, creates a compute pipeline, and returns an object with a dispatch method that executes the pipeline.
Since the pipeline is reused, there’s no additional overhead for subsequent calls.

```ts twoslash
@@ -52,16 +21,20 @@ const doubleUp = prepareDispatch(root, (x) => {
data.$[x] *= 2;
});

doubleUp(8);
doubleUp(8);
doubleUp(4);
doubleUp.dispatch(8);
doubleUp.dispatch(8);
doubleUp.dispatch(4);

// no need to call `onSubmittedWorkDone()` because the command encoder
// will queue the read after `doubleUp` anyway
// the command encoder will queue the read after `doubleUp`
console.log(await data.read()); // [0, 8, 16, 24, 16, 20, 24, 28]
```

:::note
Remember to mark the callback with the `'kernel'` directive to let TypeGPU know that this function is TGSL.
:::

The callback can have up to three arguments (dimensions).
`prepareDispatch` can simplify writing a pipeline, helping reduce serialization overhead when initializing buffers with data.
Buffer initialization commonly uses random number generators.
For that, you can use the [`@typegpu/noise`](/TypeGPU/ecosystem/typegpu-noise) library.

@@ -82,22 +55,56 @@ prepareDispatch(root, (x, y) => {
'kernel';
randf.seed2(d.vec2f(x, y).div(1024));
waterLevelMutable.$[x][y] = 10 + randf.sample();
})(1024, 512);
}).dispatch(1024, 512);
// callback will be called for x in range 0..1023 and y in range 0..511

// (optional) read values in JS
console.log(await waterLevelMutable.read());
```

It is highly recommended NOT to use `dispatch` for:
Analogously to `TgpuComputePipeline`, the object returned by `prepareDispatch` can have bind groups bound to it using the `with` method.

```ts twoslash
import tgpu, { prepareDispatch } from 'typegpu';
import * as d from 'typegpu/data';
import * as std from 'typegpu/std';
const root = await tgpu.init();
// ---cut---
const layout = tgpu.bindGroupLayout({
buffer: { storage: d.arrayOf(d.u32), access: 'mutable' },
});
const buffer1 = root
.createBuffer(d.arrayOf(d.u32, 3), [1, 2, 3]).$usage('storage');
const buffer2 = root
.createBuffer(d.arrayOf(d.u32, 4), [2, 4, 8, 16]).$usage('storage');
const bindGroup1 = root.createBindGroup(layout, {
buffer: buffer1,
});
const bindGroup2 = root.createBindGroup(layout, {
buffer: buffer2,
});

const test = prepareDispatch(root, (x) => {
'kernel';
layout.$.buffer[x] *= 2;
});

test.with(layout, bindGroup1).dispatch(3);
test.with(layout, bindGroup2).dispatch(4);

console.log(await buffer1.read()); // [2, 4, 6];
console.log(await buffer2.read()); // [4, 8, 16, 32];
```

It is recommended NOT to use `prepareDispatch` for:

- More complex compute shaders.
When using `dispatch`, it is impossible to switch bind groups or to change workgroup sizes.
When using `prepareDispatch`, it is impossible to change workgroup sizes or to use [slots](/TypeGPU/fundamentals/slots).
For such cases, a manually created pipeline would be more suitable.

- Small calls.
Usually, for small data, shader creation and dispatch are more costly than serialization.
Small buffers can be more efficiently initialized with `buffer.write()` method.
Small buffers can be more efficiently initialized with the `buffer.write()` method, as shown in the sketch below.
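
For comparison, here is a minimal sketch of that approach (the buffer name and values are purely illustrative); writing the data from JavaScript skips creating and dispatching a pipeline entirely:

```ts
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

const root = await tgpu.init();

// A tiny buffer; four elements is far too little work to justify a GPU dispatch.
const smallBuffer = root
  .createBuffer(d.arrayOf(d.u32, 4))
  .$usage('storage');

// Serializing four u32s on the CPU is cheaper than compiling and running a shader.
smallBuffer.write([1, 2, 4, 8]);

console.log(await smallBuffer.read()); // [1, 2, 4, 8]
```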

:::note
The default workgroup sizes are:
@@ -122,14 +129,14 @@ import * as d from 'typegpu/data';
const root = await tgpu.init();
// ---cut---
const callCountMutable = root.createMutable(d.u32, 0);
const dispatch = prepareDispatch(root, () => {
const compute = prepareDispatch(root, () => {
'kernel';
callCountMutable.$ += 1;
console.log('Call number', callCountMutable.$);
});

dispatch();
dispatch();
compute.dispatch();
compute.dispatch();

// Eventually...
// "[GPU] Call number 1"
4 changes: 2 additions & 2 deletions apps/typegpu-docs/src/examples/rendering/3d-fish/index.ts
@@ -98,7 +98,7 @@ function enqueuePresetChanges() {
const buffer0mutable = fishDataBuffers[0].as('mutable');
const buffer1mutable = fishDataBuffers[1].as('mutable');
const seedUniform = root.createUniform(d.f32);
const randomizeFishPositionsDispatch = prepareDispatch(root, (x) => {
const randomizeFishPositionsOnGPU = prepareDispatch(root, (x) => {
'kernel';
randf.seed2(d.vec2f(d.f32(x), seedUniform.$));
const data = ModelData({
@@ -124,7 +124,7 @@ const randomizeFishPositionsDispatch = prepareDispatch(root, (x) => {

const randomizeFishPositions = () => {
seedUniform.write((performance.now() % 10000) / 10000);
randomizeFishPositionsDispatch(p.fishAmount);
randomizeFishPositionsOnGPU.dispatch(p.fishAmount);
enqueuePresetChanges();
};

49 changes: 40 additions & 9 deletions apps/typegpu-docs/src/examples/tests/dispatch/index.ts
@@ -16,7 +16,7 @@ async function test0d(): Promise<boolean> {
prepareDispatch(root, () => {
'kernel';
mutable.$ = 126;
})();
}).dispatch();
const filled = await mutable.read();
return isEqual(filled, 126);
}
@@ -27,7 +27,7 @@ async function test1d(): Promise<boolean> {
prepareDispatch(root, (x) => {
'kernel';
mutable.$[x] = x;
})(...size);
}).dispatch(...size);
const filled = await mutable.read();
return isEqual(filled, [0, 1, 2, 3, 4, 5, 6]);
}
@@ -40,7 +40,7 @@ async function test2d(): Promise<boolean> {
prepareDispatch(root, (x, y) => {
'kernel';
mutable.$[x][y] = d.vec2u(x, y);
})(...size);
}).dispatch(...size);
const filled = await mutable.read();
return isEqual(filled, [
[d.vec2u(0, 0), d.vec2u(0, 1), d.vec2u(0, 2)],
@@ -59,7 +59,7 @@ async function test3d(): Promise<boolean> {
prepareDispatch(root, (x, y, z) => {
'kernel';
mutable.$[x][y][z] = d.vec3u(x, y, z);
})(...size);
}).dispatch(...size);
const filled = await mutable.read();
return isEqual(filled, [
[[d.vec3u(0, 0, 0), d.vec3u(0, 0, 1)]],
@@ -72,7 +72,7 @@ async function testWorkgroupSize(): Promise<boolean> {
prepareDispatch(root, (x, y, z) => {
'kernel';
std.atomicAdd(mutable.$, 1);
})(4, 3, 2);
}).dispatch(4, 3, 2);
const filled = await mutable.read();
return isEqual(filled, 4 * 3 * 2);
}
@@ -81,17 +81,47 @@ async function testMultipleDispatches(): Promise<boolean> {
const size = [7] as const;
const mutable = root
.createMutable(d.arrayOf(d.u32, size[0]), [0, 1, 2, 3, 4, 5, 6]);
const dispatch = prepareDispatch(root, (x: number) => {
const test = prepareDispatch(root, (x: number) => {
'kernel';
mutable.$[x] *= 2;
});
dispatch(6);
dispatch(2);
dispatch(4);
test.dispatch(6);
test.dispatch(2);
test.dispatch(4);
const filled = await mutable.read();
return isEqual(filled, [0 * 8, 1 * 8, 2 * 4, 3 * 4, 4 * 2, 5 * 2, 6 * 1]);
}

async function testDifferentBindGroups(): Promise<boolean> {
const layout = tgpu.bindGroupLayout({
buffer: { storage: d.arrayOf(d.u32), access: 'mutable' },
});
const buffer1 = root
.createBuffer(d.arrayOf(d.u32, 3), [1, 2, 3]).$usage('storage');
const buffer2 = root
.createBuffer(d.arrayOf(d.u32, 4), [2, 4, 8, 16]).$usage('storage');
const bindGroup1 = root.createBindGroup(layout, {
buffer: buffer1,
});
const bindGroup2 = root.createBindGroup(layout, {
buffer: buffer2,
});

const test = prepareDispatch(root, () => {
'kernel';
for (let i = d.u32(); i < std.arrayLength(layout.$.buffer); i++) {
layout.$.buffer[i] *= 2;
}
});

test.with(layout, bindGroup1).dispatch();
test.with(layout, bindGroup2).dispatch();

const filled1 = await buffer1.read();
const filled2 = await buffer2.read();
return isEqual(filled1, [2, 4, 6]) && isEqual(filled2, [4, 8, 16, 32]);
}

async function runTests(): Promise<boolean> {
let result = true;
result = await test0d() && result;
@@ -100,6 +130,7 @@ async function runTests(): Promise<boolean> {
result = await test3d() && result;
result = await testWorkgroupSize() && result;
result = await testMultipleDispatches() && result;
result = await testDifferentBindGroups() && result;
return result;
}

33 changes: 16 additions & 17 deletions apps/typegpu-docs/src/examples/tests/log-test/index.ts
@@ -16,21 +16,21 @@ export const controls = {
prepareDispatch(root, () => {
'kernel';
console.log(d.u32(321));
})(),
}).dispatch(),
},
'Multiple arguments': {
onButtonClick: () =>
prepareDispatch(root, () => {
'kernel';
console.log(d.u32(1), d.vec3u(2, 3, 4), d.u32(5), d.u32(6));
})(),
}).dispatch(),
},
'String literals': {
onButtonClick: () =>
prepareDispatch(root, () => {
'kernel';
console.log(d.u32(2), 'plus', d.u32(3), 'equals', d.u32(5));
})(),
}).dispatch(),
},
'Different types': {
onButtonClick: () =>
@@ -41,50 +41,50 @@ export const controls = {
console.log(d.vec2u(1, 2));
console.log(d.vec3u(1, 2, 3));
console.log(d.vec4u(1, 2, 3, 4));
})(),
}).dispatch(),
},
'Two logs': {
onButtonClick: () =>
prepareDispatch(root, () => {
'kernel';
console.log('First log.');
console.log('Second log.');
})(),
}).dispatch(),
},
'Two threads': {
onButtonClick: () =>
prepareDispatch(root, (x) => {
'kernel';
console.log('Log from thread', x);
})(2),
}).dispatch(2),
},
'100 dispatches': {
onButtonClick: async () => {
const indexUniform = root.createUniform(d.u32);
const dispatch = prepareDispatch(root, () => {
const test = prepareDispatch(root, () => {
'kernel';
console.log('Log from dispatch', indexUniform.$);
});
for (let i = 0; i < 100; i++) {
indexUniform.write(i);
dispatch();
test.dispatch();
console.log(`dispatched ${i}`);
}
},
},
'Varying size logs': {
onButtonClick: async () => {
const logCountUniform = root.createUniform(d.u32);
const dispatch = prepareDispatch(root, () => {
const test = prepareDispatch(root, () => {
'kernel';
for (let i = d.u32(); i < logCountUniform.$; i++) {
console.log('Log index', d.u32(i) + 1, 'out of', logCountUniform.$);
}
});
logCountUniform.write(3);
dispatch();
test.dispatch();
logCountUniform.write(1);
dispatch();
test.dispatch();
},
},
'Render pipeline': {
@@ -142,16 +129,15 @@ export const controls = {
console.log('Log 1 from thread', x);
console.log('Log 2 from thread', x);
console.log('Log 3 from thread', x);
})(16),
}).dispatch(16),
},
'Too much data': {
onButtonClick: () => {
const dispatch = prepareDispatch(root, () => {
'kernel';
console.log(d.vec3u(), d.vec3u(), d.vec3u());
});
try {
dispatch();
prepareDispatch(root, () => {
'kernel';
console.log(d.vec3u(), d.vec3u(), d.vec3u());
}).dispatch();
} catch (err) {
console.log(err);
}