
Commit d9a4b3f

feat: Make prepareDispatch withable (#1728)
1 parent c22bb46 commit d9a4b3f

File tree

10 files changed: 273 additions & 170 deletions

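In short, the object returned by `prepareDispatch` is no longer invoked directly; its `dispatch` method runs the pipeline, and bind groups can be attached first via `with`. A minimal before/after sketch of the call-site change, assuming the `doubleUp` kernel and `data` mutable from the docs diff below (their exact definitions are illustrative here):

```ts
import tgpu, { prepareDispatch } from 'typegpu';
import * as d from 'typegpu/data';

const root = await tgpu.init();
// illustrative mutable; the docs example defines its own `data`
const data = root.createMutable(d.arrayOf(d.u32, 8), [0, 1, 2, 3, 4, 5, 6, 7]);

const doubleUp = prepareDispatch(root, (x) => {
  'kernel';
  data.$[x] *= 2;
});

// before this commit: the prepared dispatch was itself callable
// doubleUp(8);

// after this commit: call the dispatch method on the returned object
doubleUp.dispatch(8);

// new in this commit: bind groups can be attached before dispatching,
// e.g. prepared.with(layout, bindGroup).dispatch(n) — see the docs diff below
```
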
apps/typegpu-docs/src/content/docs/fundamentals/utils.mdx

Lines changed: 52 additions & 45 deletions
@@ -5,39 +5,8 @@ description: A list of various utilities provided by TypeGPU.
 
 ## *prepareDispatch*
 
-The `prepareDispatch` function simplifies running simple computations on the GPU.
-Under the hood, it wraps the callback in a `TgpuFn`, creates a compute pipeline, and returns a dispatch function to execute it.
-
-This can help reduce serialization overhead when initializing buffers with data.
-
-```ts twoslash
-import tgpu, { prepareDispatch } from 'typegpu';
-import * as d from 'typegpu/data';
-
-const root = await tgpu.init();
-
-const Boid = d.struct({
-  index: d.u32,
-  pos: d.vec3f,
-});
-
-// buffer of 2048 Boids
-const boidsMutable = root.createMutable(d.arrayOf(Boid, 2048));
-
-const dispatch = prepareDispatch(root, (x) => {
-  'kernel';
-  const boidData = Boid({ index: x, pos: d.vec3f() });
-  boidsMutable.$[x] = boidData;
-});
-// run callback for each x in range 0..2047
-dispatch(2048);
-```
-
-:::note
-Remember to mark the callback with `'kernel'` directive to let TypeGPU know that this function is TGSL.
-:::
-
-The returned dispatch function can be called multiple times.
+The `prepareDispatch` function streamlines running simple computations on the GPU.
+Under the hood, it wraps the callback in a `TgpuFn`, creates a compute pipeline, and returns an object with a dispatch method that executes the pipeline.
 Since the pipeline is reused, there’s no additional overhead for subsequent calls.
 
 ```ts twoslash
@@ -52,16 +21,20 @@ const doubleUp = prepareDispatch(root, (x) => {
   data.$[x] *= 2;
 });
 
-doubleUp(8);
-doubleUp(8);
-doubleUp(4);
+doubleUp.dispatch(8);
+doubleUp.dispatch(8);
+doubleUp.dispatch(4);
 
-// no need to call `onSubmittedWorkDone()` because the command encoder
-// will queue the read after `doubleUp` anyway
+// the command encoder will queue the read after `doubleUp`
 console.log(await data.read()); // [0, 8, 16, 24, 16, 20, 24, 28]
 ```
 
+:::note
+Remember to mark the callback with the `'kernel'` directive to let TypeGPU know that this function is TGSL.
+:::
+
 The callback can have up to three arguments (dimensions).
+`prepareDispatch` can simplify writing a pipeline, helping reduce serialization overhead when initializing buffers with data.
 Buffer initialization commonly uses random number generators.
 For that, you can use the [`@typegpu/noise`](TypeGPU/ecosystem/typegpu-noise) library.
 
@@ -82,22 +55,56 @@ prepareDispatch(root, (x, y) => {
   'kernel';
   randf.seed2(d.vec2f(x, y).div(1024));
   waterLevelMutable.$[x][y] = 10 + randf.sample();
-})(1024, 512);
+}).dispatch(1024, 512);
 // callback will be called for x in range 0..1023 and y in range 0..511
 
 // (optional) read values in JS
 console.log(await waterLevelMutable.read());
 ```
 
-It is highly recommended NOT to use `dispatch` for:
+Analogously to `TgpuComputePipeline`, the result of `prepareDispatch` can have bind groups bound using the `with` method.
+
+```ts twoslash
+import tgpu, { prepareDispatch } from 'typegpu';
+import * as d from 'typegpu/data';
+import * as std from 'typegpu/std';
+const root = await tgpu.init();
+// ---cut---
+const layout = tgpu.bindGroupLayout({
+  buffer: { storage: d.arrayOf(d.u32), access: 'mutable' },
+});
+const buffer1 = root
+  .createBuffer(d.arrayOf(d.u32, 3), [1, 2, 3]).$usage('storage');
+const buffer2 = root
+  .createBuffer(d.arrayOf(d.u32, 4), [2, 4, 8, 16]).$usage('storage');
+const bindGroup1 = root.createBindGroup(layout, {
+  buffer: buffer1,
+});
+const bindGroup2 = root.createBindGroup(layout, {
+  buffer: buffer2,
+});
+
+const test = prepareDispatch(root, (x) => {
+  'kernel';
+  layout.$.buffer[x] *= 2;
+});
+
+test.with(layout, bindGroup1).dispatch(3);
+test.with(layout, bindGroup2).dispatch(4);
+
+console.log(await buffer1.read()); // [2, 4, 6];
+console.log(await buffer2.read()); // [4, 8, 16, 32];
+```
+
+It is recommended NOT to use `prepareDispatch` for:
 
 - More complex compute shaders.
-  When using `dispatch`, it is impossible to switch bind groups or to change workgroup sizes.
+  When using `prepareDispatch`, it is impossible to change workgroup sizes or to use [slots](/TypeGPU/fundamentals/slots).
   For such cases, a manually created pipeline would be more suitable.
 
 - Small calls.
   Usually, for small data the shader creation and dispatch is more costly than serialization.
-  Small buffers can be more efficiently initialized with `buffer.write()` method.
+  Small buffers can be more efficiently initialized with the `buffer.write()` method.
 
 :::note
 The default workgroup sizes are:
@@ -122,14 +129,14 @@ import * as d from 'typegpu/data';
 const root = await tgpu.init();
 // ---cut---
 const callCountMutable = root.createMutable(d.u32, 0);
-const dispatch = prepareDispatch(root, () => {
+const compute = prepareDispatch(root, () => {
   'kernel';
   callCountMutable.$ += 1;
   console.log('Call number', callCountMutable.$);
 });
 
-dispatch();
-dispatch();
+compute.dispatch();
+compute.dispatch();
 
 // Eventually...
 // "[GPU] Call number 1"

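The docs above still steer small initializations toward `buffer.write()` rather than `prepareDispatch`. A minimal sketch of that alternative, assuming the `createBuffer`/`write` API used elsewhere in this diff (the buffer name and values are illustrative):

```ts
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

const root = await tgpu.init();

// For a handful of elements, writing from the CPU avoids creating
// and dispatching a compute pipeline altogether.
const smallBuffer = root.createBuffer(d.arrayOf(d.u32, 4)).$usage('storage');
smallBuffer.write([1, 2, 3, 4]);
```
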
apps/typegpu-docs/src/examples/rendering/3d-fish/index.ts

Lines changed: 2 additions & 2 deletions
@@ -98,7 +98,7 @@ function enqueuePresetChanges() {
 const buffer0mutable = fishDataBuffers[0].as('mutable');
 const buffer1mutable = fishDataBuffers[1].as('mutable');
 const seedUniform = root.createUniform(d.f32);
-const randomizeFishPositionsDispatch = prepareDispatch(root, (x) => {
+const randomizeFishPositionsOnGPU = prepareDispatch(root, (x) => {
   'kernel';
   randf.seed2(d.vec2f(d.f32(x), seedUniform.$));
   const data = ModelData({
@@ -124,7 +124,7 @@ const randomizeFishPositionsDispatch = prepareDispatch(root, (x) => {
 
 const randomizeFishPositions = () => {
   seedUniform.write((performance.now() % 10000) / 10000);
-  randomizeFishPositionsDispatch(p.fishAmount);
+  randomizeFishPositionsOnGPU.dispatch(p.fishAmount);
   enqueuePresetChanges();
 };
 
apps/typegpu-docs/src/examples/simple/increment/index.ts

Lines changed: 2 additions & 2 deletions
@@ -6,14 +6,14 @@ const root = await tgpu.init();
 const counter = root.createMutable(d.u32);
 
 // A 0-dimentional shader function
-const dispatch = prepareDispatch(root, () => {
+const incrementKernel = prepareDispatch(root, () => {
   'kernel';
   counter.$ += 1;
 });
 
 async function increment() {
   // Dispatch and read the result
-  dispatch();
+  incrementKernel.dispatch();
   return await counter.read();
 }
 
apps/typegpu-docs/src/examples/tests/dispatch/index.ts

Lines changed: 40 additions & 9 deletions
@@ -16,7 +16,7 @@ async function test0d(): Promise<boolean> {
   prepareDispatch(root, () => {
     'kernel';
     mutable.$ = 126;
-  })();
+  }).dispatch();
   const filled = await mutable.read();
   return isEqual(filled, 126);
 }
@@ -27,7 +27,7 @@ async function test1d(): Promise<boolean> {
   prepareDispatch(root, (x) => {
     'kernel';
     mutable.$[x] = x;
-  })(...size);
+  }).dispatch(...size);
   const filled = await mutable.read();
   return isEqual(filled, [0, 1, 2, 3, 4, 5, 6]);
 }
@@ -40,7 +40,7 @@ async function test2d(): Promise<boolean> {
   prepareDispatch(root, (x, y) => {
     'kernel';
     mutable.$[x][y] = d.vec2u(x, y);
-  })(...size);
+  }).dispatch(...size);
   const filled = await mutable.read();
   return isEqual(filled, [
     [d.vec2u(0, 0), d.vec2u(0, 1), d.vec2u(0, 2)],
@@ -59,7 +59,7 @@ async function test3d(): Promise<boolean> {
   prepareDispatch(root, (x, y, z) => {
     'kernel';
     mutable.$[x][y][z] = d.vec3u(x, y, z);
-  })(...size);
+  }).dispatch(...size);
   const filled = await mutable.read();
   return isEqual(filled, [
     [[d.vec3u(0, 0, 0), d.vec3u(0, 0, 1)]],
@@ -72,7 +72,7 @@ async function testWorkgroupSize(): Promise<boolean> {
   prepareDispatch(root, (x, y, z) => {
     'kernel';
     std.atomicAdd(mutable.$, 1);
-  })(4, 3, 2);
+  }).dispatch(4, 3, 2);
   const filled = await mutable.read();
   return isEqual(filled, 4 * 3 * 2);
 }
@@ -81,17 +81,47 @@ async function testMultipleDispatches(): Promise<boolean> {
   const size = [7] as const;
   const mutable = root
     .createMutable(d.arrayOf(d.u32, size[0]), [0, 1, 2, 3, 4, 5, 6]);
-  const dispatch = prepareDispatch(root, (x: number) => {
+  const test = prepareDispatch(root, (x: number) => {
     'kernel';
     mutable.$[x] *= 2;
   });
-  dispatch(6);
-  dispatch(2);
-  dispatch(4);
+  test.dispatch(6);
+  test.dispatch(2);
+  test.dispatch(4);
   const filled = await mutable.read();
   return isEqual(filled, [0 * 8, 1 * 8, 2 * 4, 3 * 4, 4 * 2, 5 * 2, 6 * 1]);
 }
 
+async function testDifferentBindGroups(): Promise<boolean> {
+  const layout = tgpu.bindGroupLayout({
+    buffer: { storage: d.arrayOf(d.u32), access: 'mutable' },
+  });
+  const buffer1 = root
+    .createBuffer(d.arrayOf(d.u32, 3), [1, 2, 3]).$usage('storage');
+  const buffer2 = root
+    .createBuffer(d.arrayOf(d.u32, 4), [2, 4, 8, 16]).$usage('storage');
+  const bindGroup1 = root.createBindGroup(layout, {
+    buffer: buffer1,
+  });
+  const bindGroup2 = root.createBindGroup(layout, {
+    buffer: buffer2,
+  });
+
+  const test = prepareDispatch(root, () => {
+    'kernel';
+    for (let i = d.u32(); i < std.arrayLength(layout.$.buffer); i++) {
+      layout.$.buffer[i] *= 2;
+    }
+  });
+
+  test.with(layout, bindGroup1).dispatch();
+  test.with(layout, bindGroup2).dispatch();
+
+  const filled1 = await buffer1.read();
+  const filled2 = await buffer2.read();
+  return isEqual(filled1, [2, 4, 6]) && isEqual(filled2, [4, 8, 16, 32]);
+}
+
 async function runTests(): Promise<boolean> {
   let result = true;
   result = await test0d() && result;
@@ -100,6 +130,7 @@ async function runTests(): Promise<boolean> {
   result = await test3d() && result;
   result = await testWorkgroupSize() && result;
   result = await testMultipleDispatches() && result;
+  result = await testDifferentBindGroups() && result;
   return result;
 }
 
apps/typegpu-docs/src/examples/tests/log-test/index.ts

Lines changed: 16 additions & 17 deletions
@@ -20,29 +20,29 @@ export const controls = {
       prepareDispatch(root, () => {
         'kernel';
         console.log(d.u32(321));
-      })(),
+      }).dispatch(),
   },
   'Multiple arguments': {
     onButtonClick: () =>
       prepareDispatch(root, () => {
         'kernel';
         console.log(d.u32(1), d.vec3u(2, 3, 4), d.u32(5), d.u32(6));
-      })(),
+      }).dispatch(),
   },
   'String literals': {
     onButtonClick: () =>
       prepareDispatch(root, () => {
         'kernel';
         console.log(d.u32(2), 'plus', d.u32(3), 'equals', d.u32(5));
-      })(),
+      }).dispatch(),
   },
   'Two logs': {
     onButtonClick: () =>
       prepareDispatch(root, () => {
         'kernel';
         console.log('First log.');
         console.log('Second log.');
-      })(),
+      }).dispatch(),
   },
   'Different types': {
     onButtonClick: () =>
@@ -86,42 +86,42 @@ export const controls = {
         } else {
           console.log("The 'shader-f16' flag is not enabled.");
         }
-      })(),
+      }).dispatch(),
   },
   'Two threads': {
     onButtonClick: () =>
       prepareDispatch(root, (x) => {
         'kernel';
         console.log('Log from thread', x);
-      })(2),
+      }).dispatch(2),
   },
   '100 dispatches': {
     onButtonClick: async () => {
       const indexUniform = root.createUniform(d.u32);
-      const dispatch = prepareDispatch(root, () => {
+      const test = prepareDispatch(root, () => {
         'kernel';
         console.log('Log from dispatch', indexUniform.$);
       });
       for (let i = 0; i < 100; i++) {
         indexUniform.write(i);
-        dispatch();
+        test.dispatch();
         console.log(`dispatched ${i}`);
       }
     },
   },
   'Varying size logs': {
     onButtonClick: async () => {
       const logCountUniform = root.createUniform(d.u32);
-      const dispatch = prepareDispatch(root, () => {
+      const test = prepareDispatch(root, () => {
         'kernel';
         for (let i = d.u32(); i < logCountUniform.$; i++) {
           console.log('Log index', d.u32(i) + 1, 'out of', logCountUniform.$);
         }
       });
       logCountUniform.write(3);
-      dispatch();
+      test.dispatch();
       logCountUniform.write(1);
-      dispatch();
+      test.dispatch();
     },
   },
   'Render pipeline': {
@@ -179,16 +179,15 @@ export const controls = {
         console.log('Log 1 from thread', x);
         console.log('Log 2 from thread', x);
         console.log('Log 3 from thread', x);
-      })(16),
+      }).dispatch(16),
   },
   'Too much data': {
     onButtonClick: () => {
-      const dispatch = prepareDispatch(root, () => {
-        'kernel';
-        console.log(d.mat4x4f(), d.mat4x4f(), 1);
-      });
       try {
-        dispatch();
+        prepareDispatch(root, () => {
+          'kernel';
+          console.log(d.vec3u(), d.vec3u(), d.vec3u());
+        }).dispatch();
       } catch (err) {
         console.log(err);
       }
