Simple build performance comparison

I have used Python scripts based on the idea at https://mesonbuild.com/Simple-comparison.html to compare the setup and build times of different build tools/generators.
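The core of such a measurement script can be sketched in a few lines of Python. This is a simplified sketch, not the actual measure.py from the linked repository; the function names and the command strings passed in are illustrative assumptions:

```python
import subprocess
import time

def timed(cmd):
    """Run a shell command and return its wall-clock duration in seconds."""
    start = time.monotonic()
    subprocess.run(cmd, shell=True, check=True)
    return time.monotonic() - start

def measure(name, gen_cmd, build_cmd, clean_cmd):
    """Time generation, full build, no-op build, clean, and rebuild."""
    results = {
        'gen': timed(gen_cmd),
        'build': timed(build_cmd),
        'empty build': timed(build_cmd),  # second build is a no-op
        'clean': timed(clean_cmd),
        'rebuild': timed(build_cmd),
    }
    results['overall'] = sum(results.values())
    print(name)
    for step, seconds in results.items():
        print(f' {seconds:.3f} {step}')
    return results
```

A hypothetical call would look like `measure('cmake-ninja', 'cmake -G Ninja -S . -B build', 'ninja -C build', 'ninja -C build clean')`, producing output in the same shape as the tables below.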

see https://github.com/ClausKlein/build-performance/blob/master/README.rst#the-ninja-based-builds-are-the-best-as-expected

The ninja-based builds are the fastest, as expected.

One interesting point is the different sizes of the generated ninja build files. The meson generator creates only one file, a simpler and clearer build.ninja:

clausklein$ find build -name '*.ninja' -ls
81941300       48 -rw-r--r--    1 clausklein   staff   21306  8 Feb 10:17 build/buildcmakeninja/build.ninja
81941301        8 -rw-r--r--    1 clausklein   staff    2681  8 Feb 10:17 build/buildcmakeninja/rules.ninja
81941494       16 -rw-r--r--    1 clausklein   staff    5686  8 Feb 10:17 build/buildmeson/build.ninja
clausklein$

Interesting, but if you want my opinion, the performance comparison ought to be done on a much bigger source tree.
Concerning size, it is probable that the meson-generated ninja file is simpler. Note, however, that in your case this does not seem to impair the build time.
In my experience with a bigger source tree (500+ targets, 1000+ source files) built with CMake + ninja, the generation time may be quite long (~1 min), but the build is as fast as your host can handle in terms of CPU, memory, and I/O.

The biggest issue with CMake w.r.t. big monolithic projects is the cost of regeneration time.
This is my own experience though.

meson does this better; see https://github.com/ClausKlein/build-performance/blob/master/README.rst#and-build-performance-with-a-real-project

Not really on my side:

$ python3 ./measure.py jsoncpp
Running command: rm -rf buildcmake && mkdir -p buildcmake && cd buildcmake && CC='ccache gcc' cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DCMAKE_BUILD_TYPE=Debug ..
Running command: cd buildcmake && make -j 2
Running command: cd buildcmake && make -j 2
Running command: cd buildcmake && make -j 2 clean
Running command: cd buildcmake && make -j 2
Running command: rm -rf buildcmakeninja && mkdir -p buildcmakeninja && cd buildcmakeninja && CC='ccache gcc' cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DCMAKE_BUILD_TYPE=Debug -G Ninja ..
Running command: cd buildcmakeninja && ninja -j 2
Running command: cd buildcmakeninja && ninja -j 2
Running command: cd buildcmakeninja && ninja -j 2 clean
Running command: cd buildcmakeninja && ninja -j 2
Running command: rm -rf buildmeson && mkdir -p buildmeson && CC='ccache gcc' meson buildmeson
Running command: ninja -C buildmeson -j 2
Running command: ninja -C buildmeson -j 2
Running command: ninja -C buildmeson -j 2 clean
Running command: ninja -C buildmeson -j 2
cmake-make
 1.784 gen
 4.632 build
 0.081 empty build
 0.187 clean
 4.555 rebuild
 11.238 overall
cmake-ninja
 1.350 gen
 4.215 build
 0.004 empty build
 0.008 clean
 4.201 rebuild
 9.778 overall
meson
 0.501 gen
 8.591 build
 0.004 empty build
 0.005 clean
 8.525 rebuild
 17.627 overall

It’s even worse if you bump the number of cores used:

$ python3 ./measure.py jsoncpp
Running command: rm -rf buildcmake && mkdir -p buildcmake && cd buildcmake && CC='ccache gcc' cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DCMAKE_BUILD_TYPE=Debug ..
Running command: cd buildcmake && make -j 5
Running command: cd buildcmake && make -j 5
Running command: cd buildcmake && make -j 5 clean
Running command: cd buildcmake && make -j 5
Running command: rm -rf buildcmakeninja && mkdir -p buildcmakeninja && cd buildcmakeninja && CC='ccache gcc' cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DCMAKE_BUILD_TYPE=Debug -G Ninja ..
Running command: cd buildcmakeninja && ninja -j 5
Running command: cd buildcmakeninja && ninja -j 5
Running command: cd buildcmakeninja && ninja -j 5 clean
Running command: cd buildcmakeninja && ninja -j 5
Running command: rm -rf buildmeson && mkdir -p buildmeson && CC='ccache gcc' meson buildmeson
Running command: ninja -C buildmeson -j 5
Running command: ninja -C buildmeson -j 5
Running command: ninja -C buildmeson -j 5 clean
Running command: ninja -C buildmeson -j 5
cmake-make
 1.781 gen
 3.379 build
 0.086 empty build
 0.104 clean
 3.372 rebuild
 8.722 overall
cmake-ninja
 1.374 gen
 2.884 build
 0.004 empty build
 0.008 clean
 3.043 rebuild
 7.313 overall
meson
 0.524 gen
 6.541 build
 0.003 empty build
 0.005 clean
 6.568 rebuild
 13.641 overall
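For what it’s worth, the scaling from -j 2 to -j 5 can be computed directly from the two runs above. The numbers are copied from the tables; this is just arithmetic on the posted results, not a new measurement:

```python
# Full-build times in seconds, copied from the -j 2 and -j 5 tables above.
build_j2 = {'cmake-make': 4.632, 'cmake-ninja': 4.215, 'meson': 8.591}
build_j5 = {'cmake-make': 3.379, 'cmake-ninja': 2.884, 'meson': 6.541}

for tool in build_j2:
    speedup = build_j2[tool] / build_j5[tool]
    print(f'{tool}: {speedup:.2f}x faster with -j 5')
# cmake-make: 1.37x, cmake-ninja: 1.46x, meson: 1.31x
```

All three are far below the 2.5x one might hope for from 2.5x the jobs, which already hints that the project does not expose enough parallelism to keep 5 jobs busy.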

So meson is better on initial gen time and generated ninja file size, but it is way slower on the initial build and rebuild.

Note that jsoncpp is a better code base, but it has only ~10 kloc of C/C++, which is not that big.
I don’t really know where the differences come from (disk I/O maybe, ccache size?), but the current comparison seems too simple to give an appropriate answer for a bigger project (250 kloc, like CMake itself) or a bigger machine (20+ cores, multiple GiB of RAM) like the ones you usually set up for a fast CI.

You are right, the jsoncpp project was a start, I will continue.

IMHO, with -j 5 your build should be much faster than mine with -j 2:

cmake-make
 6.605 gen
 1.294 build
 0.289 empty build
 0.217 clean
 1.044 rebuild
 9.449 overall
cmake-ninja
 3.913 gen
 0.560 build
 0.017 empty build
 0.039 clean
 0.497 rebuild
 5.027 overall
meson
 2.609 gen
 1.221 build
 0.014 empty build
 0.032 clean
 1.146 rebuild
 5.022 overall

Try this branch again; I have changed the project settings to be more comparable. The meson build was a release build, and the make build included some CTest stuff. My setup was not right!

And do not forget: run the test at least two times to fill the ccache.

On my old MacBook Pro I have only 2 cores. It is 12 years old, but it still works (slowly) :wink:

Not really; like I said, jsoncpp is better but still really small w.r.t. parallel builds.
If you dump the ninja dependency graph using:

ninja -t graph | dot -o the-build.png -Tpng

you’ll see that the best you can get is 5 parallel jobs, and not for long, since the graph
only counts 11 files to build. So you really don’t have enough build parallelism to evaluate parallel builds.
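Without rendering the graph, one can also estimate the available parallelism by counting the non-phony build edges in the generated build.ninja. A rough sketch (the sample file content below is illustrative, not the real jsoncpp file, and the parsing ignores ninja edge cases like escaped colons):

```python
import re

def count_build_edges(ninja_text):
    """Count 'build' statements in a ninja file, skipping phony edges."""
    edges = 0
    for line in ninja_text.splitlines():
        m = re.match(r'build\s+.*?:\s*(\S+)', line)
        if m and m.group(1) != 'phony':
            edges += 1
    return edges

# Illustrative build.ninja snippet (not the real generated file).
sample = """\
rule cc
  command = gcc -c $in -o $out
build src/a.o: cc src/a.c
build src/b.o: cc src/b.c
build alias: phony src/a.o
"""
print(count_build_edges(sample))  # 2 non-phony build edges
```

If this count is barely above your -j value, the build cannot sustain that many parallel jobs for long.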

In the same way, try:
export NINJA_STATUS="[%f/%t (%r) - %e] "
then build with ninja -v -j 5;
you’ll see that very soon you end up with only 2 parallel build jobs. (The quotes are needed so the shell does not interpret the brackets and parentheses.)

Don’t get me wrong, the comparison is interesting, but you certainly won’t get the full power of ninja (neither with CMake nor with meson) unless you have a bigger parallel build to do, i.e. 1000+ items to build and a mean possible parallelism of 8 or more, which any average desktop machine can handle
nowadays (no offense to your computer or mine).

And yes I warmed-up the ccache before running the tests.

@Eric Noulard Do you have your python script to share? I’m not a Python guy, but I could throw the test at our project (http://github.com/bluequartzsoftware/dream3d), which has about 1500 files and about 500 kloc to compile. Typical 12C/24T systems take about 20 minutes to compile it.

ITK/VTK/ParaView would also all be good targets to try this out on.

I used the script from Claus found here:

does DREAM3D build with both meson and CMake, or do you simply want to evaluate the build speed
with cmake + ninja?

DREAM3D builds with whatever CMake supports, so there is no Meson support; maybe it is not a relevant test.

The script can be used to compare the build times of different build systems like make, ninja, Xcode, with and without ccache …

I used it to see what the fastest generated build solution is: make/ninja or meson/ninja.