Commit 7be4bdf

bench: add benchmark running script that recreates the published benchmarks (#267)
* feat: add benchmark running script that recreates the published benchmarks
* refactor: add error handling to run_command
1 parent 73d66a0 commit 7be4bdf

2 files changed

Lines changed: 58 additions & 4 deletions

File tree

benchmark/README.md
benchmark/scripts/run_benchmarks.py

Lines changed: 7 additions & 4 deletions
@@ -2,14 +2,17 @@
 
 ## Description
 
-Given the number of each commitment length, the total number of commitments, and the number of bytes used to represent data elements, we compute each commitment sequence, using for that curve25519 and ristretto group operations. The data elements are randomly generated (in a deterministic way, given that the same seed is always provided).
+Given the length of each commitment, the total number of commitments, and the number of bytes used to represent data elements, we compute each commitment sequence using Curve25519 and Ristretto group operations. The data elements are randomly generated (in a deterministic way, since the same seed is always provided).
 
 ## Running the benchmarks
 
-To run the whole benchmark on the GPU (update accordingly to use CPU), execute:
+To run the whole benchmark suite from outside of the Nix development shell, execute:
 
 ```
-docker run --rm -e TEST_TMPDIR=/root/.cache_bazel -v blitzar/benchmark/multi_commitment:/root/.cache_bazel -v "$PWD":/src -w /src --gpus all --privileged -it joestifler/blitzar:7.0 benchmark/multi_commitment/scripts/run_benchmark.py --backend gpu --output-dir benchmark/multi_commitment/.proof_results --force-rerun-bench 1 --run-bench-callgrind 1 --run-bench 1
+nix develop --command python3 ./benchmark/scripts/run_benchmarks.py <cpu|gpu>
 ```
 
-Some files are generated in this process. They can be found on `benchmark/multi_commitment/.proof_results/` directory.
+From inside the Nix development shell, execute:
+
+```
+python3 ./benchmark/scripts/run_benchmarks.py <cpu|gpu>
+```
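The description above notes that the benchmark's data elements are generated deterministically from a fixed seed. A minimal sketch of that idea in Python (a hypothetical `generate_elements` helper, not the repository's actual generator):

```python
import random

def generate_elements(seed: int, num_elements: int, element_nbytes: int) -> list:
    """Deterministically generate random data elements from a fixed seed."""
    rng = random.Random(seed)  # seeded PRNG: same seed, same sequence
    return [rng.randbytes(element_nbytes) for _ in range(num_elements)]

# Reusing the same seed reproduces the exact same elements,
# so benchmark runs operate on identical inputs.
a = generate_elements(seed=0, num_elements=4, element_nbytes=32)
b = generate_elements(seed=0, num_elements=4, element_nbytes=32)
assert a == b
```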
benchmark/scripts/run_benchmarks.py

Lines changed: 51 additions & 0 deletions
@@ -0,0 +1,51 @@
```python
#!/usr/bin/env python3

import sys
import subprocess
import argparse


def run_command(cmd):
    """Run a shell command, streaming its output; exit on failure."""
    print(f"Running: {' '.join(cmd)}")
    try:
        subprocess.run(cmd, check=True)
    except subprocess.CalledProcessError as e:
        print(f"Error: Command '{' '.join(cmd)}' failed with return code {e.returncode}")
        # The command's output is streamed directly to the terminal above;
        # e.output is only populated when output is captured, so it is not printed here.
        sys.exit(1)


def main():
    # Set up argument parsing
    parser = argparse.ArgumentParser(description='Run Blitzar benchmarks')
    parser.add_argument('device', choices=['cpu', 'gpu'], help='Device to run benchmarks on')
    args = parser.parse_args()

    # Define benchmark parameters
    sizes = [10000, 100000, 1000000]
    num_samples = 10
    num_commitments = [1, 10]
    element_nbytes = [1, 32]
    verbose = 0

    # Multi-commitment benchmark
    for commitment in num_commitments:
        for nbytes in element_nbytes:
            for size in sizes:
                cmd = [
                    "bazel", "run", "-c", "opt", "//benchmark/multi_commitment:benchmark", "--",
                    args.device, str(size), str(num_samples), str(commitment), str(nbytes), str(verbose)
                ]
                run_command(cmd)

    # Inner product proof benchmark
    for size in sizes:
        cmd = [
            "bazel", "run", "-c", "opt", "//benchmark/inner_product_proof:benchmark", "--",
            args.device, str(size), str(num_samples)
        ]
        run_command(cmd)


if __name__ == "__main__":
    main()
```
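For reference, the positional-argument order the script hands to the multi-commitment benchmark target can be sketched with a small helper that mirrors the inner loop body (`build_multi_commitment_cmd` is a hypothetical name introduced here for illustration):

```python
def build_multi_commitment_cmd(device, size, num_samples, commitment, nbytes, verbose=0):
    """Mirror the bazel command list assembled in the script's inner loop.

    All numeric parameters are converted to strings, since subprocess
    command lists must contain strings only.
    """
    return [
        "bazel", "run", "-c", "opt", "//benchmark/multi_commitment:benchmark", "--",
        device, str(size), str(num_samples), str(commitment), str(nbytes), str(verbose),
    ]

cmd = build_multi_commitment_cmd("gpu", 10000, 10, 1, 32)
print(cmd[-6:])  # the six positional benchmark arguments, all as strings
```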
