I’ve successfully gotten my CDash dashboard up and running with data from CTest, and now I’m thinking about creating benchmarks for my library.
I’m wondering what the typical approach to handling benchmarks is within the workflow of a CMake-based project. Do you register your benchmark runners as tests with CTest, and perhaps fail those tests if a significant regression has occurred? Or is there another harness you would use to run benchmarks and keep track of their data?
I don’t think CTest is well-suited for benchmarking, but if I were to do it, I’d mark the test as RUN_SERIAL (so other tests can’t run concurrently and skew the timings) and use hyperfine to run the benchmarks, with a wrapper script to analyze the results and make a pass/fail determination. Note that you might need a fixture to gather baseline data on the machine under test, to account for the differences between, say, a developer workstation and a Raspberry Pi.
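As a rough sketch of that setup, something like the following could go in your CMakeLists.txt. Everything here except the CTest properties themselves is an assumption: `bench_runner` is a stand-in for your benchmark executable, and `benchmark_wrapper.sh` / `record_baseline.sh` are hypothetical scripts you'd write to invoke hyperfine (e.g. with `--export-json`), compare against the baseline, and exit non-zero on a significant regression.

```cmake
# Hypothetical setup fixture: records baseline timings for this
# machine so regressions are judged relative to the host, not an
# absolute number. record_baseline.sh is a placeholder script.
add_test(NAME bench_baseline
  COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/scripts/record_baseline.sh)
set_tests_properties(bench_baseline PROPERTIES
  FIXTURES_SETUP bench_data)

# The benchmark itself: benchmark_wrapper.sh (also a placeholder)
# would run hyperfine on the bench_runner target, compare the JSON
# results against the baseline, and fail on a regression.
add_test(NAME bench_sort
  COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/scripts/benchmark_wrapper.sh
          $<TARGET_FILE:bench_runner>)
set_tests_properties(bench_sort PROPERTIES
  RUN_SERIAL TRUE                # don't let parallel tests skew timings
  FIXTURES_REQUIRED bench_data)  # ensure the baseline exists first
```

With this arrangement `ctest` orders the baseline fixture before the benchmark automatically, and the results still flow to CDash like any other test.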