The `compare.py` utility can be used to compare the results of benchmarks.
The utility relies on the scipy package, which can be installed using pip:
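```
# One way to install the dependency; adjust for your environment
# (e.g. pip vs. pip3, or installing inside a virtualenv).
pip3 install scipy
```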
The switch `-a` / `--display_aggregates_only` can be used to control the display of the normal iterations vs. the aggregates. When passed, it is both forwarded to the benchmark binaries being run and accounted for in the tool itself: only the aggregates will be displayed, not the normal runs. It only affects the display; the separate runs will still be used to calculate the U test.
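For example, when comparing two binaries with repetitions enabled (a sketch using the first of the modes described below; the binary names are hypothetical, and the switch is placed before the mode name so that the tool itself sees it):

```
$ compare.py -a benchmarks ./benchmark_old ./benchmark_new --benchmark_repetitions=10
```

Here `--benchmark_repetitions=10` is simply passed through to the binaries.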
There are three modes of operation:
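The first mode compares two benchmarks against each other. A sketch of the invocation (the `benchmarks` mode name is assumed here; consult the tool's help output if your copy differs):

```
$ compare.py benchmarks <benchmark_baseline> <benchmark_contender> [benchmark options]...
```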
Where `<benchmark_baseline>` and `<benchmark_contender>` each specify either a benchmark executable file or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results; otherwise the results are simply loaded from the output file.
`[benchmark options]` will be passed to the benchmark invocations. They can be anything the binary accepts, be it normal `--benchmark_*` parameters or custom parameters your binary takes.
Example output:
What it does is, for every benchmark from the first run, it looks for the benchmark with exactly the same name in the second run and then compares the results. If the names differ, the benchmark is omitted from the diff. As you can note, the values in the `Time` and `CPU` columns are calculated as `(new - old) / |old|`.
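The second mode compares two different filters of one benchmark. A sketch of the invocation, assuming the mode is named `filters`:

```
$ compare.py filters <benchmark> <filter_baseline> <filter_contender> [benchmark options]...
```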
Where `<benchmark>` either specifies a benchmark executable file or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results; otherwise the results are simply loaded from the output file.
Where `<filter_baseline>` and `<filter_contender>` are the same regex filters that you would pass to the `[--benchmark_filter=<regex>]` parameter of the benchmark binary.
`[benchmark options]` will be passed to the benchmark invocations. They can be anything the binary accepts, be it normal `--benchmark_*` parameters or custom parameters your binary takes.
Example output:
As you can see, it applies the filter to the benchmarks, both when running the benchmark and before doing the diff. To make the diff work, the matched portions of the benchmark names are replaced with a common string; thus, you can compare two different benchmark families within one benchmark binary. As you can note, the values in the `Time` and `CPU` columns are calculated as `(new - old) / |old|`.
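The third mode compares different filters of two (potentially different) benchmarks. A sketch of the invocation, assuming the mode is named `benchmarksfiltered`:

```
$ compare.py benchmarksfiltered <benchmark_baseline> <filter_baseline> <benchmark_contender> <filter_contender> [benchmark options]...
```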
Where `<benchmark_baseline>` and `<benchmark_contender>` each specify either a benchmark executable file or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results; otherwise the results are simply loaded from the output file.
Where `<filter_baseline>` and `<filter_contender>` are the same regex filters that you would pass to the `[--benchmark_filter=<regex>]` parameter of the benchmark binary.
`[benchmark options]` will be passed to the benchmark invocations. They can be anything the binary accepts, be it normal `--benchmark_*` parameters or custom parameters your binary takes.
Example output:
This is a mix of the previous two modes: two (potentially different) benchmark binaries are run, and a different filter is applied to each one. As you can note, the values in the `Time` and `CPU` columns are calculated as `(new - old) / |old|`.
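To make the `Time`/`CPU` columns concrete, here is a minimal Python sketch of that relative-difference calculation (the function is illustrative, not part of `compare.py`):

```python
def relative_difference(old: float, new: float) -> float:
    """Relative change shown in the Time/CPU columns: (new - old) / |old|."""
    return (new - old) / abs(old)

# Example: a baseline of 90 time units vs. a contender of 77 time units
# gives roughly -0.144, i.e. the contender is ~14.4% faster.
print(relative_difference(90.0, 77.0))
```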
Performance measurements are an art, and performance comparisons are doubly so. Results are often noisy, and the absolute differences are not necessarily large, so by visual inspection alone it is not at all apparent whether two measurements actually show a performance change. It gets even more confusing with multiple benchmark repetitions.
Thankfully, we can use statistical tests on the results to determine whether the performance has changed in a statistically significant way. `compare.py` uses the Mann–Whitney U test, with the null hypothesis being that there is no difference in performance.
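For intuition, here is a minimal sketch of such a test using scipy; the per-repetition timings below are made up, and `compare.py` performs the equivalent comparison on the actual timings from the two runs:

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-repetition real times (e.g. in ns) from baseline and contender runs.
baseline  = [90.1, 90.4, 89.8, 90.2, 90.0, 89.9, 90.3, 90.1, 90.2]
contender = [77.2, 77.0, 77.5, 76.9, 77.1, 77.3, 77.0, 77.2, 76.8]

result = mannwhitneyu(baseline, contender, alternative="two-sided")
alpha = 0.05  # chosen significance level
if result.pvalue < alpha:
    print(f"p = {result.pvalue:.4f}: statistically significant difference")
else:
    print(f"p = {result.pvalue:.4f}: no statistically significant difference")
```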
The below output is a summary of a benchmark comparison with statistics provided for a multi-threaded process.
Here's a breakdown of each row:
1. `benchmark/threads:1/process_time/real_time_pvalue`: This shows the p-value for the statistical test comparing the performance of the process running with one thread. A value of 0.0000 suggests a statistically significant difference in performance. The comparison was conducted using the U Test (Mann-Whitney U Test) with 27 repetitions for each case.
2. `benchmark/threads:1/process_time/real_time_mean`: This shows the relative difference in mean execution time between the two cases. The negative value (-0.1442) implies that the new process is faster by about 14.42%. The old time was 90 units, while the new time is 77 units.
3. `benchmark/threads:1/process_time/real_time_median`: Similarly, this shows the relative difference in the median execution time. Again, the new process is faster, by 14.44%.
4. `benchmark/threads:1/process_time/real_time_stddev`: This is the relative difference in the standard deviation of the execution time, which is a measure of how much variation or dispersion there is from the mean. The positive value (+0.3974) implies there is more variance in the execution time of the new process.
5. `benchmark/threads:1/process_time/real_time_cv`: CV stands for Coefficient of Variation. It is the ratio of the standard deviation to the mean and provides a standardized measure of dispersion. The increase (+0.6329) indicates more relative variability in the new process.
6. `OVERALL_GEOMEAN`: Geomean stands for geometric mean, a type of average that is less influenced by outliers. The negative value indicates a general improvement in the new process. However, given that the values are all zero for the old and new times, this seems to be a mistake or placeholder in the output.
Let's first try to see what the different columns represent in the above `compare.py` benchmarking output:
In the comparison section, the relative differences in both time and CPU time are displayed for each input size.
A statistically-significant difference is determined by a p-value, which is a measure of the probability that the observed difference could have occurred just by random chance. A smaller p-value indicates stronger evidence against the null hypothesis.
Therefore:
The result of said statistical test is additionally communicated through color coding:
The benchmarks are statistically different. This could mean the performance has either significantly improved or significantly deteriorated. You should look at the actual performance numbers to see which is the case.
The benchmarks are statistically similar. This means the performance hasn't significantly changed.
In statistical terms, 'green' means we reject the null hypothesis that there's no difference in performance, and 'red' means we fail to reject the null hypothesis. This might seem counter-intuitive if you're expecting 'green' to mean 'improved performance' and 'red' to mean 'worsened performance'.
Also, please note that even if we determine that there is a statistically significant difference between the two measurements, it does not necessarily mean that the benchmarks being measured actually differ; and vice versa, even if we determine that there is no statistically significant difference, it does not necessarily mean that the benchmarks are the same.
If there is a sufficient number of repetitions of the benchmarks, the tool can perform a U test with the null hypothesis that it is equally likely that a randomly selected value from one sample will be less than or greater than a randomly selected value from the second sample.
If the calculated p-value is lower than the significance level alpha, then the result is said to be statistically significant and the null hypothesis is rejected; in other words, the two benchmarks aren't identical.
WARNING: this requires a LARGE number of repetitions (no fewer than 9) to be meaningful!
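For example, a sufficient repetition count can be requested by passing the repetition flag through to both binaries (binary names are hypothetical):

```
$ compare.py benchmarks ./benchmark_old ./benchmark_new --benchmark_repetitions=9
```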