This repository was archived by the owner on Nov 15, 2023. It is now read-only.
Merged
41 commits
24cf315
Create a macro which automates creation of benchmark test suites.
coriolinus Feb 11, 2021
0209e4d
bump impl_version
coriolinus Feb 11, 2021
df14bba
allow unused on test_bench_by_name
coriolinus Feb 11, 2021
dacbe67
use proper doctest ignore attribute
coriolinus Feb 11, 2021
47c795f
Explicitly hand the Module to the test suite
coriolinus Feb 12, 2021
4b3be35
explicitly import what we need into `mod tests`
coriolinus Feb 12, 2021
9577ee7
bench_module is `ident` not `tt`
coriolinus Feb 12, 2021
a6cf5d4
allow end users to specify arguments for new_test_ext
coriolinus Feb 12, 2021
ebf64f0
enable explicitly specifying the path to the benchmarks invocation
coriolinus Feb 12, 2021
e0444d8
Revert "bump impl_version"
coriolinus Feb 12, 2021
cb6a33d
list failing benchmark tests and the errors which caused the failure
coriolinus Feb 12, 2021
fd71312
harden benchmark tests against internal panics
coriolinus Feb 12, 2021
7169e83
suppress warning about ignored profiles
coriolinus Feb 15, 2021
c382fb2
impl_benchmark_test_suite for assets
coriolinus Feb 15, 2021
bd39a02
impl_benchmark_test_suite for balances
coriolinus Feb 15, 2021
ebcf6d8
impl_benchmark_test_suite for bounties
coriolinus Feb 15, 2021
e0839c2
impl_benchmark_test_suite for Collective
coriolinus Feb 15, 2021
5e45e40
impl_benchmark_test_suite for Contracts
coriolinus Feb 15, 2021
5c9f586
impl_benchmark_test_suite for Democracy
coriolinus Feb 15, 2021
2730e07
don't impl_benchmark_test_suite for Elections-Phragmen
coriolinus Feb 15, 2021
4206191
impl_benchmark_test_suite for Identity
coriolinus Feb 15, 2021
9889fc9
impl_benchmark_test_suite for ImOnline
coriolinus Feb 15, 2021
10079b5
impl_benchmark_test_suite for indices
coriolinus Feb 15, 2021
ffa7693
impl_benchmark_test_suite for lottery
coriolinus Feb 15, 2021
01964c6
impl_benchmark_test_suite for merkle-mountain-range
coriolinus Feb 15, 2021
3ef76f3
impl_benchmark_test_suite for Multisig
coriolinus Feb 15, 2021
deb140f
impl_benchmark_test_suite for offences
coriolinus Feb 15, 2021
c154611
impl_benchmark_test_suite for Proxy
coriolinus Feb 15, 2021
f4cdd67
impl_benchmark_test_suite for scheduler
coriolinus Feb 15, 2021
a350c64
impl_benchmark_test_suite for session
coriolinus Feb 15, 2021
6efac34
impl_benchmark_test_suite for staking
coriolinus Feb 15, 2021
3e14cd2
impl_benchmark_test_suite for system
coriolinus Feb 15, 2021
681deaf
impl_benchmark_test_suite for timestamp
coriolinus Feb 15, 2021
4e03f34
impl_benchmark_test_suite for tips
coriolinus Feb 15, 2021
82f2a0d
impl_benchmark_test_suite for treasury
coriolinus Feb 15, 2021
7fcb460
impl_benchmark_test_suite for utility
coriolinus Feb 15, 2021
db62d9f
impl_benchmark_test_suite for vesting
coriolinus Feb 15, 2021
5341906
fix wrong module name in impl_benchmark_test_suite in Offences
coriolinus Feb 15, 2021
1732671
address line length nits
coriolinus Feb 15, 2021
f1859ee
enable optional keyword argument: exec_name
coriolinus Feb 15, 2021
f3d2f8f
get rid of dead code
coriolinus Feb 15, 2021
explicitly import what we need into mod tests
coriolinus committed Feb 12, 2021
commit 4b3be3576e2c5f2670725d7da61d338a671152ab
4 changes: 2 additions & 2 deletions frame/benchmarking/src/lib.rs
@@ -993,14 +993,14 @@ macro_rules! impl_benchmark_test_suite {
($bench_module:tt, $new_test_ext:path, $test:path) => {
Contributor
Suggested change:
- ($bench_module:tt, $new_test_ext:path, $test:path) => {
+ ($bench_module:ident, $new_test_ext:path, $test:path) => {

Maybe we can use `ident` instead of `tt`.
Also, I guess we can't make it a `path` because later we do `$bench_module::<$test>::benchmarks`, and the macro fails to concatenate the path somehow, doesn't it?
If that is the reason, then we can consider moving to a procedural macro as well.
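The fragment-specifier point can be illustrated with a small, self-contained sketch (`Pallet`, `Runtime`, and `list_benchmarks` are hypothetical stand-ins, not the real macro or pallet types): an `ident` capture can be spliced directly into a turbofish call, which is exactly the concatenation that fails with a `path` capture.

```rust
use std::marker::PhantomData;

// Hypothetical stand-ins for a pallet's module type and the test runtime.
struct Runtime;
struct Pallet<T>(PhantomData<T>);

impl<T> Pallet<T> {
    // Mimics the generated benchmarks-listing function.
    fn benchmarks(_extra: bool) -> Vec<&'static str> {
        vec!["transfer", "set_balance"]
    }
}

// With `$bench_module:ident` the captured name splices directly into
// `$bench_module::<$test>::benchmarks`; a `path` capture fails to
// concatenate with `::<...>` here, per the discussion above.
macro_rules! list_benchmarks {
    ($bench_module:ident, $test:ty) => {
        $bench_module::<$test>::benchmarks(true)
    };
}

fn main() {
    let names = list_benchmarks!(Pallet, Runtime);
    assert_eq!(names, vec!["transfer", "set_balance"]);
    println!("{:?}", names);
}
```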

Contributor Author

That's precisely why we can't use path. I'll try it with ident. I'd prefer not to move to a procedural macro, if possible.

Contributor
@gui1117 Feb 12, 2021

Why not move to a procedural macro?
I actually prefer them; they are easier to extend and have less weird stuff like this.
cc @shawntabrizi

Contributor Author

My main objection to procedural macros is that they're a heavy hammer: they take much longer to develop (at my level of experience) than macros-by-example. There is also some weirdness about requiring their own specially-configured crate. It doesn't feel worth it for a macro as small as impl_benchmark_test_suite currently is.

However, as I wrote in #8104 (comment), the only way I can see to get a single test case per benchmark is to merge the functionality into the benchmarks macro. Once I'm at that level of rewriting, I agree: it's worth the time to refactor the existing functionality into a procedural macro because doing so will be easier than substantially modifying a huge recursive macro by example such as currently exists.

Contributor

My 2 cents is that once you get the initial scaffolding there, proc-macros are well worth the extra time in the long run. For this use case, I'll admit that I would personally go with them, but I can totally agree that it is simpler to start with a simple macro_rules.

#[cfg(test)]
mod tests {
- use super::*;
+ use super::{test_bench_by_name, $bench_module};
Contributor
test_bench_by_name is still used implicitly by the macro.

Contributor Author

Actually, I'm not sure that we want to make this explicit: the function is ultimately generated by the benchmarks! macro, and isn't visible to the user unless they do something like cargo doc --all-features. Passing it in explicitly gives the user more flexibility in terms of putting the impl_benchmark_test_suite invocation somewhere other than the benchmarks invocation, but if we require the user to provide it, it'll just be noise an easy majority of the time.

What do you think of this: we could add an optional parameter $path_to_benchmarks_invocation:path which defaults to super. We then use $path_to_benchmarks_invocation::test_bench_by_name;. It's still kind of implicit, but when the user needs to deal with it, it's less abstruse; they just fill in the path to the module in which they invoked benchmarks!.
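That default-to-`super` idea can be sketched as follows. This is a simplified, hypothetical model: `impl_suite`, `run_all`, and the stub `test_bench_by_name` stand in for the real generated items, the module argument is captured as a single token, and the real macro would also need to accept multi-segment paths.

```rust
// Hypothetical sketch: the macro takes an optional module path naming where
// the `benchmarks!`-generated helper lives, defaulting to `super`.
macro_rules! impl_suite {
    // No argument: assume the helper lives in the enclosing module.
    () => {
        impl_suite!(super);
    };
    // Single-token module path (e.g. `super`, or a sibling module name).
    ($path:tt) => {
        pub fn run_all() -> Result<(), &'static str> {
            use $path::test_bench_by_name;
            test_bench_by_name("transfer")
        }
    };
}

mod benches {
    // Stand-in for the function the real `benchmarks!` macro generates.
    pub fn test_bench_by_name(name: &str) -> Result<(), &'static str> {
        if name.is_empty() { Err("unknown benchmark") } else { Ok(()) }
    }

    pub mod tests {
        // Expands with the default `super`, finding the helper above.
        impl_suite!();
    }
}

fn main() {
    assert!(benches::tests::run_all().is_ok());
    println!("ok");
}
```

The default keeps the common case quiet, while a user who invokes the macro somewhere else just fills in the path to the module containing the `benchmarks!` invocation.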

Contributor

Another idea would be to move impl_benchmark_test_suite inside the benchmarks macro itself, as an optional final argument.
I don't really know what is best.

Contributor Author

ebf64f0 has what I was talking about. I think that's more usable than requiring the end-user to specify a generated function.

use $crate::frame_support::assert_ok;

#[test]
fn test_benchmarks() {
$new_test_ext().execute_with(|| {
use $crate::Benchmarking;
- for benchmark_name in $bench_module::<$test>::benchmarks(true) {
+ for benchmark_name in $bench_module ::<$test>::benchmarks(true) {
assert_ok!(test_bench_by_name::<$test>(benchmark_name));
Member

I am concerned with this approach: if one of the benchmarks fails, it will be hard to identify which benchmark is the issue.

I would prefer we actually generate N tests in this macro where each benchmark gets its own entire function, and thus when it fails, it is easy to identify which test failed.

Member

This is critical for debugging during benchmark development.

Contributor Author

> I would prefer we actually generate N tests in this macro where each benchmark gets its own entire function

I'd prefer that too. Unfortunately, it's hard: we don't have a simple way to iterate over the names of the benchmarks within the macro scope which generates the test module, which is why we introduce the test_bench_by_name function at all.

I'll think some more about how to accomplish this.

Contributor

Moving the assert to inside test_bench_by_name should be equivalent to what was there before, no?

Contributor Author

No: a failing assertion within test_bench_by_name still fails the single test without testing anything else, or propagating information about which test failed.

A short-term hack for identifying failing benchmarks would be to simply print the current benchmark to stdout: you can identify the failing benchmark because its name will be the last printed. On success, cargo test will just swallow stdout.

However, doing this properly involves building the functionality of impl_benchmark_test_suite directly into the benchmarks macro; there just isn't another way that I can see to generate individual test cases without requiring the user to explicitly enumerate the generated test functions. Obviously, explicitly enumerating those is worse than the status quo: it retains the boilerplate, but makes it less clear what is being accomplished.

If I have to substantially modify the benchmarks macro, I'd like to rewrite it as a procedural macro, because that's just much easier to understand than some 900 lines of interlinked recursive example macros as currently exist. However, that feels like kind of an enormous job. I can say that it'll take me a fair while to complete it.

Contributor Author

Before I start on a huge project rewriting the benchmarks macro, what do you think of cb6a33d? It fails at the goal of generating a single test case per benchmark test, but it succeeds at the goals of running all the tests, and providing a list of failing tests, and what the failures were.
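The shape of that approach can be sketched in isolation (a simplified model, not the cb6a33d code itself; `run_suite`, `Bench`, and the toy benchmarks are hypothetical): run every benchmark, record each failure with its name, and only assert at the end.

```rust
type Bench = fn() -> Result<(), String>;

// Run every benchmark, collecting (name, error) pairs instead of stopping
// at the first failure, so all failing benchmarks can be reported at once.
fn run_suite(benches: Vec<(&'static str, Bench)>) -> Vec<(&'static str, String)> {
    let mut failures = Vec::new();
    for (name, run) in benches {
        if let Err(err) = run() {
            failures.push((name, err));
        }
    }
    failures
}

fn main() {
    let failures = run_suite(vec![
        ("transfer", (|| Ok(())) as Bench),
        ("set_balance", (|| Err("storage root mismatch".to_string())) as Bench),
    ]);
    // A single assertion at the end still fails the overall test, but only
    // after every benchmark has run and every failure has been recorded.
    assert_eq!(failures.len(), 1);
    assert_eq!(failures[0].0, "set_balance");
    println!("{:?}", failures);
}
```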

Contributor

It is hard for me to tell without a UI test, but I can be fine with it.

Contributor

Hmm, actually a lot of code in benchmarks simply does assert and unwrap instead of returning an error.
For those, maybe we can print the name of the test before its execution.

Contributor Author

Good point. fd71312
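The hardening referred to here can be sketched in isolation (hypothetical toy benchmarks; the real change lives in fd71312): wrapping each run in `std::panic::catch_unwind` turns an internal `assert!` or `unwrap` into a recordable failure instead of aborting the whole suite, and printing the name before execution identifies the culprit regardless.

```rust
use std::panic;

fn main() {
    // Silence the default panic message so the deliberate panic below does
    // not clutter the output; catch_unwind still reports it as Err.
    panic::set_hook(Box::new(|_| {}));

    // Hypothetical toy benchmarks: one succeeds, one panics internally.
    let benches: Vec<(&'static str, fn())> = vec![
        ("ok_bench", || {}),
        ("panicking_bench", || panic!("internal assert failed")),
    ];

    let mut failed = Vec::new();
    for (name, run) in benches {
        // Print the name before running, so even a hard abort would still
        // leave the failing benchmark's name as the last line printed.
        println!("running benchmark {}", name);
        if panic::catch_unwind(run).is_err() {
            failed.push(name);
        }
    }
    assert_eq!(failed, vec!["panicking_bench"]);
}
```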

}
});