---
tags: sidechains, benchmark
---
# Benchmark libraries
### Time measurements
#### time
The OCaml instrumented runtime can record timestamped runtime events (e.g. GC activity) to a trace file for offline analysis.
https://ocaml.org/releases/4.12/htmlman/instrumented-runtime.html
#### perf, perf-stat
Run a command and gather performance counter statistics.
https://man7.org/linux/man-pages/man1/perf-stat.1.html
Benchmark scripts:
https://github.com/andikleen/pmu-tools
https://github.com/ocaml-bench/ocaml_bench_scripts
https://gist.github.com/Dieterbe/a52c95a9603507670eb39274544ee1a8
#### profiling
https://ocaml.org/releases/4.12/htmlman/profil.html
There are two types of profiling that you can do on OCaml programs:
- Get execution counts for bytecode.
- Get real profiling for native code.
1. The `ocamlcp` and `ocamlprof` programs perform profiling on bytecode.
https://ocaml.org/learn/tutorials/performance_and_profiling.html
2. [`gprof`](https://caml.inria.fr/pub/old_caml_site/ocaml/htmlman/manual031.html): the execution of OCaml programs can be profiled by recording how many times functions are called, which branches of conditionals are taken, etc.
3. `landmarks`: a performance-monitoring library
Landmarks is a simple profiling library for OCaml. It provides primitives to delimit portions of code and to measure the performance of the instrumented code at runtime (see the sketch after this list). https://github.com/LexiFi/landmarks
4. bechamel-perf: https://docs.ocaml.pro/sources/bechamel-perf.0.1.0/index.html
5. bechamel-notty: https://docs.ocaml.pro/sources/bechamel-notty.0.1.0/index.html
6. orun: https://github.com/ocaml-bench/orun
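A minimal sketch of manual instrumentation with `landmarks` (item 3 above), assuming the `register`/`enter`/`exit`/`start_profiling` API from the LexiFi README; the landmark names and the measured loop are illustrative:
```
(* Register landmarks once, at module initialisation. *)
let main = Landmark.register "main"
let heavy_work = Landmark.register "heavy_work"

let work () =
  (* Delimit the portion of code to measure. *)
  Landmark.enter heavy_work;
  for _ = 1 to 1_000_000 do
    ignore (Sys.opaque_identity (42 * 42))
  done;
  Landmark.exit heavy_work

let () =
  (* Start collecting; by default landmarks reports the results when the
     program exits (see the README for output options). *)
  Landmark.start_profiling ();
  Landmark.enter main;
  work ();
  Landmark.exit main
```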
#### metrics
Metrics provides a basic infrastructure to monitor and gather runtime metrics for OCaml programs. Monitoring is performed on sources, indexed by tags, allowing users to enable or disable the gathering of data points at runtime. As disabled metric sources have a low runtime cost (only a closure allocation), the library is designed to instrument production systems.
https://github.com/mirage/metrics
#### valgrind
For C programs. Valgrind is an instrumentation framework for building dynamic analysis tools. There are Valgrind tools that can automatically detect many memory management and threading bugs, and profile your programs in detail. You can also use Valgrind to build new tools.
https://valgrind.org/
---
### Microbenchmarking libraries
#### core_bench
Source: https://github.com/janestreet/core_bench
https://github.com/janestreet/core_bench/wiki/Getting-Started-with-Core_bench
https://blog.janestreet.com/core_bench-micro-benchmarking-for-ocaml/
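A minimal sketch of a core_bench executable along the lines of the Getting-Started wiki; `Command_unix.run` assumes a recent core release (older examples use `Core.Command.run`), and the benchmarked thunks are illustrative:
```
open Core
open Core_bench

let () =
  (* Each test is a thunk; core_bench runs it repeatedly and estimates
     the per-call time and allocation. *)
  Command_unix.run
    (Bench.make_command
       [ Bench.Test.create ~name:"List.rev" (fun () ->
             ignore (List.rev [ 1; 2; 3; 4; 5 ]))
       ; Bench.Test.create ~name:"Array.create" (fun () ->
             ignore (Array.create ~len:1000 0))
       ])
```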
#### benchmark
This tool is suitable if we want to measure the run time of one or many functions, using either a latency test (a fixed number of repetitions) or a throughput test (repeat until some time period has passed).
For example:
Run the function `f` with input 5000 for 10 iterations and print the CPU times
`Benchmark.latency1 10 f 5000`
Run the tests `foo`, `bar`, and `baz` three times for at least 8 seconds each, printing the results of each test, and then print a cross-tabulation of the results:
```
open Benchmark

(* Placeholder test functions; any functions taking an int argument will do. *)
let foo n = ignore (Array.make n 0)
let bar n = ignore (String.make n 'x')
let baz n = ignore (Array.init n (fun i -> i))

let () =
  let res =
    throughputN ~repeat:3 8
      [ ("foo", foo, 1000000);
        ("bar", bar, 2000000);
        ("baz", baz, 3000000) ]
  in
  print_newline ();
  tabulate res
```
Measure/compare run-time of OCaml functions.
https://github.com/Chris00/ocaml-benchmark
Doc: https://chris00.github.io/ocaml-benchmark/doc/benchmark/Benchmark/index.html
#### bench
This is a very old library; it seems similar to the benchmark library.
A benchmarking tool for statistically valid benchmarks
https://github.com/thelema/bench/
Example with bench: https://gist.github.com/jj-issuu/8caa9ed31b2f689af96d
#### operf-macro
This is a macro-benchmarking tool (i.e. it benchmarks whole programs), but it can also be used for micro-benchmarks, like core_bench.
A macro-benchmarking suite for OCaml
https://www.typerex.org/operf-macro.html
https://github.com/ocaml-bench/ocaml_bench_scripts
---
### Preprocessor-helped micro benchmarking
#### ppx_bench
Syntax extension for writing in-line benchmarks in OCaml code.
https://github.com/janestreet/ppx_bench
Document of ppx_bench: https://v3.ocaml.org/p/ppx_bench/v0.15.0/doc/index.html
Running example of ppx_bench:
- https://github.com/ocaml/dune/tree/main/bench/micro
- https://github.com/ocaml/dune/issues/65
Issue discussion about ppx_bench: https://github.com/janestreet/core_bench/issues/12
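A minimal sketch of the inline-benchmark syntax supported by ppx_bench; the benchmarked expressions are illustrative, and the file containing them must be preprocessed with ppx_bench and run through the inline-benchmark runner described in its documentation:
```
(* A plain let%bench times the expression on the right-hand side. *)
let%bench "List.rev" = List.rev [ 1; 2; 3; 4; 5 ]

(* let%bench_fun allows setup outside the timed closure;
   only the returned thunk is measured. *)
let%bench_fun "Array.fill" =
  let a = Array.make 1_000 0 in
  fun () -> Array.fill a 0 1_000 1
```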
---
### Others
#### Continuous benchmarks: current-bench
This tool is suitable when we need a stable environment for benchmarking. It works on GitHub branches and pull requests: for example, if we benchmark a function `add` on the main branch and later optimize `add` on another branch, we can compare the benchmark results for `add` on main against those for the optimized version.
Prototype for running predictable, IO-bound benchmarks in an ocurrent pipeline. This is work in progress.
https://tarides.com/blog/2021-08-26-benchmarking-ocaml-projects-with-current-bench
https://github.com/ocurrent/current-bench
Talk: https://icfp21.sigplan.org/details/ocaml-2021-papers/8/Continuous-Benchmarking-for-Ocaml-Projects
#### Tezos benchmarks
- tezos-snoop: this tool makes it possible to benchmark any given piece of OCaml code and to use these measurements to fit cost models that predict execution time. http://tezos.gitlab.io/developer/snoop.html
- Tezos benchmark libs
- reference source https://gitlab.com/tezos/tezos/-/tree/master/src/lib_benchmark
The `example` subdirectory contains a full example of a benchmark of the Blake2 hash function.
- protocol bench: https://gitlab.com/tezos/tezos/-/tree/master/src/proto_alpha/lib_benchmarks_proto
https://gitlab.com/tezos/tezos/-/tree/master/src/proto_alpha/lib_benchmark
#### Sandmark
Sandmark is a suite of OCaml benchmarks and a collection of tools to configure different compiler variants, run the benchmarks, and visualise the results. Sandmark includes both sequential and parallel benchmarks.
https://github.com/ocaml-bench/sandmark
https://github.com/orgs/ocaml-bench/repositories?type=all
https://docs.ocaml.pro/sources.html
---
## Papers
[How not to lie with statistics: the correct way to summarize benchmark results](https://www.cse.unsw.edu.au/~cs9242/19/papers/Fleming_Wallace_86.pdf)