Benchmarking data: CTest or something else?

I’ve successfully gotten my CDash dashboard up and running with data from CTest, and now I’m thinking about creating benchmarks for my library.

I’m wondering how benchmarks are typically handled in the workflow of a CMake-based project. Do you register your benchmark runners as tests with CTest, and perhaps fail those tests when a significant regression occurs? Or is there another harness you would use to run benchmarks and keep track of their data over time?

I appreciate any input!

I don’t think CTest is well-suited for benchmarking, but if I were to do it, I’d mark the test as RUN_SERIAL and use hyperfine to run the measurements, with a wrapper script that analyzes the results and makes the pass/fail determination. Note that you might need a fixture to gather baseline data to account for the differences between, say, a developer workstation and a Raspberry Pi.
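
A rough sketch of what that registration could look like, assuming hypothetical `run_benchmark.sh` / `record_baseline.sh` wrapper scripts (the first invoking hyperfine and exiting non-zero on a regression) and placeholder target/test names:

```cmake
# Hypothetical: run_benchmark.sh wraps hyperfine, compares against the
# recorded baseline, and exits non-zero if a regression is detected.
add_test(
  NAME bench.my_library
  COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/scripts/run_benchmark.sh
          $<TARGET_FILE:my_library_bench>
)

# A setup fixture records machine-specific baseline timings first, so a
# developer workstation and a Raspberry Pi are each compared against
# their own numbers rather than one shared threshold.
add_test(
  NAME bench.baseline
  COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/scripts/record_baseline.sh
)
set_tests_properties(bench.baseline PROPERTIES
  FIXTURES_SETUP benchmark_env
  RUN_SERIAL TRUE   # keep other tests from perturbing the timings
)
set_tests_properties(bench.my_library PROPERTIES
  FIXTURES_REQUIRED benchmark_env
  RUN_SERIAL TRUE
)
```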


Thank you for introducing me to hyperfine, it looks like a very helpful tool!

I figured as much: I’ll probably end up writing some kind of CMake driver script that gets executed either as a test or as a custom target.
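
Something along these lines is what I have in mind, with a hypothetical `Benchmark.cmake` script run in script mode (all names are placeholders):

```cmake
# Expose the same driver script both as a CTest test and as a
# standalone target that can be invoked outside of CTest.
add_test(
  NAME benchmarks
  COMMAND ${CMAKE_COMMAND}
          -DBENCH_EXE=$<TARGET_FILE:my_library_bench>
          -P ${CMAKE_CURRENT_SOURCE_DIR}/cmake/Benchmark.cmake
)

add_custom_target(run_benchmarks
  COMMAND ${CMAKE_COMMAND}
          -DBENCH_EXE=$<TARGET_FILE:my_library_bench>
          -P ${CMAKE_CURRENT_SOURCE_DIR}/cmake/Benchmark.cmake
  VERBATIM
)
```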

Thank you for the info!