PCH: Don't understand that part of the doc

Hi,
I don’t clearly understand this part of the precompiled headers documentation:
Public / Private
Specifically, this sentence:

A notable exception to this is where an interface library is created to define a commonly used set of precompile headers in one place and then other targets link to that interface library privately

The context:

I am in a situation where I have many small static libraries and I would like to minimize my compile time as much as possible.
Those static libs use the same includes over and over, so maybe I could get some improvement here.

I would like one dummy library to hold the precompiled header, instead of each static library precompiling the same headers X times over.

If in every static lib I do
target_link_libraries(${APP_NAME} PUBLIC path/to/libDummyPCH.a)
do I still benefit from the precompiled header of the static lib?

Which would basically mean no further effort to create a local precompiled header in each library?

Thanks

This hides the target dependency from CMake. You should probably use the name of the DummyPCH target instead.

Other than that, I’m not familiar enough with how PCH works in CMake to be able to answer, sorry. I would guess that PUBLIC is what you want. There’s also this section which seems relevant to your use case.
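Concretely, linking by target name rather than by file path would look something like this; a sketch assuming DummyPCH is defined as a target in the same build (names taken from the question, file names made up):

```cmake
# DummyPCH is a real target in this build, not a prebuilt .a on disk.
add_library(DummyPCH STATIC dummy.cpp)
# PUBLIC precompile headers propagate to consumers as a usage requirement.
target_precompile_headers(DummyPCH PUBLIC common.h)

# Consumers inherit the PCH requirement by linking the target by name.
target_link_libraries(${APP_NAME} PUBLIC DummyPCH)
```

Note that each consumer still compiles its own copy of the PCH; what propagates is the list of headers to precompile, not the compiled .gch itself.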

Thanks Ben,

precompiled headers have some black magic I can’t get through.

I already read that part, but it needs a target, and at this point I am lost because I don’t know how to provide the library’s target name.
And anyway, judging by the size of the static lib (9 MB) for multiple headers, when the .gch is 700 MB+, I suspect it won’t work.

I will point to the cmake_pch.hxx file in my sources, hoping that the cmake_pch.hxx.gch will be picked up, but I am not sure…

How is DummyPCH.a made? Is it not in the same project with add_library(DummyPCH) or the like?

Here is an excerpt of my cmake

LIB_NAME is the lib I am trying to compile

set(MyPCHpath /path/to/pch.a)
add_library(${LIB_NAME} STATIC ${SOURCES})
add_library(PCH STATIC IMPORTED)
set_target_properties(PCH PROPERTIES IMPORTED_LOCATION "${MyPCHpath}")
target_precompile_headers(${LIB_NAME} REUSE_FROM PCH)
set_target_properties(${LIB_NAME} PROPERTIES LINKER_LANGUAGE CXX)
set_target_properties(PCH PROPERTIES LINKER_LANGUAGE CXX)

but it fails with

PRECOMPILE_HEADERS_REUSE_FROM set with non existing target

I don’t know how to work around that honestly
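For reference, REUSE_FROM is designed for the case where both targets live in the same build; a minimal sketch with made-up target and file names:

```cmake
# The provider target compiles the PCH once. It must be a real target
# in this build, not an IMPORTED target pointing at a prebuilt .a.
add_library(pch_provider STATIC pch_dummy.cpp)
target_precompile_headers(pch_provider PRIVATE common.h)

# Other targets reuse that already-compiled PCH. The compiler, flags
# and language must match the provider's for the reuse to be valid.
add_library(mylib STATIC mylib.cpp)
target_precompile_headers(mylib REUSE_FROM pch_provider)
```

This is why pointing REUSE_FROM at a target from a different project fails: the PCH artifact and the settings it was built with are not available to the consuming build.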

No, it is not the same project. I have the feeling I am doing it all wrong for some reason.

I just shaved 10 s off my build. I won’t reuse the pch file since I don’t know how to do it. But with 20 or so libs it means I will recompile the same file 20 times; that’s a waste.

In my opinion, we should be able to do so:

target_precompile_headers(<target> REUSE_FROM full/path/to/already/compiled/gch)

In which case it would be really handy.

Any opinion on this?

PCH information is not exported, so I don’t know what facilities for importing them would look like. This sounds like a new feature to me.

I tried with another lib in the meantime: went from 46s to 8s.

So yes, it is definitely worth it.

Too bad we can’t do anything about it, except manually editing the CMake-generated compile files to point to the already compiled pch, but that really seems hacky and cumbersome.

/src/main.cpp
/src/lib1.cpp
/src/lib2.cpp
/src/lib3.cpp
/include/config.h
/include/main.h
/include/lib1.h
/include/lib2.h
/include/lib3.h

Given the above, you might want ‘config.h’ to be part of everyone’s PCH. In this scenario, you might create an interface target:

add_library (OurConfig INTERFACE)
target_include_directories(OurConfig INTERFACE ${CMAKE_CURRENT_LIST_DIR}/include)
target_sources (OurConfig INTERFACE ${CMAKE_CURRENT_LIST_DIR}/include/config.h)
target_precompile_headers (OurConfig INTERFACE ${CMAKE_CURRENT_LIST_DIR}/include/config.h)

This “interface library” or “fake target” is just a bag of properties and associations, which you can now use to bring forward the requirement for config.h as part of a PCH:

add_library(lib1 lib1.cpp)
target_link_libraries(lib1 PRIVATE OurConfig)

I’ve made a full example on GitHub: kfsone/cmake-pch (Example of CMake PCH interfacing).

cmake_minimum_required (VERSION 3.18)
project (cmake-pch LANGUAGES CXX)

set (CMAKE_CXX_STANDARD 11)
set (CMAKE_CXX_STANDARD_REQUIRED ON)

# Path to our include folder.
set (_inc_dir "${CMAKE_CURRENT_LIST_DIR}/include")

# CMake uses the obscure 'INTERFACE library' as property bags
add_library (project-config INTERFACE)
# These will be inherited by anyone who links against us, which is an oddly build-centric
# way to think about it.
target_include_directories (project-config INTERFACE ${_inc_dir})
# we depend on our source file
target_sources (project-config INTERFACE ${_inc_dir}/config.h)

# and a second one for the pch
add_library (project-config-pch INTERFACE)
# and it gets used as our precompiled header.
target_precompile_headers (project-config-pch INTERFACE ${_inc_dir}/config.h)

# Now for our actual libraries.
add_library (lib1 src/lib1.cpp include/lib1.h)

# Make the association and inherit properties
target_link_libraries (lib1 PUBLIC project-config)
# And privately make use of the pch without exposing it
target_link_libraries (lib1 PRIVATE project-config-pch)

target_precompile_headers (lib1 PRIVATE include/lib1.h)


# And our executable
add_executable (my-exe src/main.cpp include/main.h)

target_precompile_headers (my-exe PRIVATE include/main.h)

# This does not have any effect on our pch
target_link_libraries (my-exe lib1)

# But uncomment this to make the executable also use config.h in its pch.
#target_link_libraries (my-exe project-config-pch)

unset (_inc_dir)

Thanks Oliver,

I would have never done that, I will check it

I may have been struggling with a non issue:

Gcc precompiled header doc says:

A precompiled header file is searched for when #include is seen in the compilation. As it searches for the included file (see Search Path in The C Preprocessor) the compiler looks for a precompiled header in each directory just before it looks for the include file in that directory. The name searched for is the name specified in the #include with ‘.gch’ appended. If the precompiled header file cannot be used, it is ignored.

For instance, if you have #include "all.h", and you have all.h.gch in the same directory as all.h, then the precompiled header file is used if possible, and the original header is used otherwise.

which basically translates to: I create a library with the precompiled header, all my sources needing it include that pch header, and gcc will be smart enough to find the .gch sitting in the same folder.

For those interested:

Running an i7-8750H with an M.2 SSD (SSDPEKNW512G8) and 32 GB RAM, 12 threads at 100%.

The timings are underwhelming. A NON-SCIENTIFIC but real-world comparison on my project:

Project compilation time:
No PCH:
Debug: 25s
Release + LTO: 36s

PCH (precompiled, so no pch compilation during the build)
Debug: 52s
Release + LTO: 45s

The moment I removed the pchs, the build went faster.

Maybe it helps machines with lower specs, not sure, but I won’t bother further.
Forward declarations and clean headers brought a much better experience (5 GB less RAM used during the build, and a few seconds shaved off, approx. 4 s).

Maybe other people have a similar experience, or completely contradictory, I don’t know.

It entirely depends on what/how you are building. The project I’m currently working on is a mid-sized one, so without PCH it takes about 20 minutes to build the complete set of primary targets on an i7 11th gen, 64 GB DDR4-4000, industrial 3rd-gen NVMe, with Visual Studio.

Adding the precompiled headers to 20 of the 480 targets reduced the compile time to 6-9 minutes.

pch won’t improve - often worsens - cold build times for relatively small projects, because it still has to do all that compiling.

Whether to use pch or not should start with a little inspection of what exactly you are including. For instance, if heavyweight C++ standard headers have crept into your common headers, they can individually add 10-100 ms per compilation unit on gcc/clang, 40-200 ms on msvc.

If, on the other hand, you have an excellent separation of concerns, so that very few compilation units include the same headers, PCH won’t help you - in fact it will make things worse.

And rarely will putting all your headers into a pch help. What you actually want to target are the headers that are accessed most often and consume the most front-end compile time.

E.g., if you know that every .cpp in your project is going to want the same handful of standard headers, and most of them want a few more, create a “pch.h” file listing those, and make that your precompiled header file - that will force every compilation unit to precompile it, and then those costly files won’t need to be parsed each time.
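A minimal CMake sketch of that approach (file and target names are examples, not taken from the thread):

```cmake
# pch.h contains only the expensive, widely shared includes, e.g.:
#   #include <string>
#   #include <vector>
#   #include <memory>
#   #include <algorithm>
add_library(mylib STATIC src/lib1.cpp src/lib2.cpp)
# Every compilation unit of mylib now starts from the precompiled
# form of pch.h instead of re-parsing those headers each time.
target_precompile_headers(mylib PRIVATE pch.h)
```

Keeping pch.h down to the genuinely expensive headers is the whole game: each addition makes the PCH bigger for every consumer, whether they needed it or not.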

Definitely do not add trivial header files:

#pragma once
#ifndef THIS_INCLUDE_FILE
#define THIS_INCLUDE_FILE

namespace Ours
{
enum States
{
  On, Off
};
}

#endif

the cost of pch’ing that will outweigh the cost of simply including it in up to a dozen compilation units.

If your compile times are problematic, you may want to consider a Jumbo/Unity build instead (see the UNITY_BUILD property in the CMake documentation), but again, that has caveats.

A final option you might want to look at is ccache (a compiler cache) and/or distcc (a fast, free distributed C/C++ compiler).

Ccache is a no-brainer on Linux, almost instantaneous benefits for smaller projects. The amount of supervision increases with project scale, although I’ve yet to run into any of the weird edge-cases people see and I’ve worked on some doozies (Blizzard/Facebook/SpaceX).

Ccache has some gotchas on macOS (it doesn’t do well with dual-architecture builds; Xcode chose to do them in a specific way which is cache-defeating), and it’s supposed to work on Windows, but I’m not sure if that’s MinGW-specific - I haven’t gotten around to setting it up.
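For CMake projects, ccache is usually wired in via the compiler-launcher variables; a sketch assuming ccache is on PATH:

```cmake
find_program(CCACHE_PROGRAM ccache)
if(CCACHE_PROGRAM)
  # Prefix every compiler invocation with ccache.
  set(CMAKE_C_COMPILER_LAUNCHER   "${CCACHE_PROGRAM}")
  set(CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PROGRAM}")
endif()
```

Placed near the top of the top-level CMakeLists.txt (or passed on the command line as -DCMAKE_CXX_COMPILER_LAUNCHER=ccache), this needs no other changes to the build.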

— edit

Addenda: If compilation speed is a major factor for you, you may want to consider Go. The compiler is so fast that you can work on the Docker source in VS Code and have the compiler rebuild the entire project quickly enough that it is how tooltips/IntelliSense are handled.

Thanks a lot Oliver,

Maybe I did something wrong…
I just added STL includes in the pch … and carefully avoided any of my files.

My project is small/medium (70k LOC, not including 3rd parties), carefully divided into multiple static libraries.

I code like I would in C, no real OOP, free functions instead, in well defined namespaces.

Each library is kept small enough to avoid a high cost of inclusion and templates are avoided as much as I can.

But I agree that some libraries call other ones but that’s the nature of the strategy to avoid repetition/errors.

I really need C++ because the 3rd parties are in C++ as well and I don’t want to bother with C bindings and all.

But yeah… GCC 12.2, CMake+Ninja and mold as the linker still have a lot to deliver, I guess.

I will try again tomorrow to double check if they really were taken into account.

@glad PCH solves for header bloat by making the front-end parse of the bloat a 1-time op. If you don’t have bloat, all you’re doing is adding work. Pch isn’t free, it just amortizes parser time in preprocessing and template instantiations.

The next file that’s compiled can just load a binary form of that first part of the parse and continue from there.

That’s much faster but not free, and the more you put into your pch, the more bloat that adds to every file that uses the pch.

If you had a project with 100 files that #include <string>, then a pch that #include’s it will save you about 1s of compile time.

But if instead you made it #include the whole of the std library, then you would probably add 5-10 s to your build time. First, the pch would take longer to compile; and second, every name lookup the compiler has to do in your source now requires a check against the entire namespace of std.

With a “jumbo” or “unity” build, cmake merges all of your code into one source file and invokes the compiler once. If your code doesn’t overly rely on header-file shenanigans, that may be the best way to improve your build speed: one invocation of the compiler, one pass over everything.
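In CMake this is driven by the UNITY_BUILD target property; a sketch with an illustrative target name:

```cmake
add_library(mylib STATIC src/lib1.cpp src/lib2.cpp src/lib3.cpp)
# Merge the sources into jumbo translation units before compiling.
set_target_properties(mylib PROPERTIES
    UNITY_BUILD ON           # or set CMAKE_UNITY_BUILD globally
    UNITY_BUILD_BATCH_SIZE 8 # how many sources per jumbo TU
)
```

The batch size is the usual knob to tune: very large batches maximize the parsing savings but hurt incremental rebuilds, since touching one source recompiles the whole batch.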

— addenda

a little STL-heavy project; Ninja, i7 7th gen, 2nd-gen SSD, DDR4-2600

$ find src -name \*.cpp -or -name \*.h | wc -l
105
# .. repeat build 5 times, pick average
$ time cmake >/dev/null --build without-jumbo --clean-first
real      0m47.112s
# .. repeat build 5 times, pick average
$ time cmake >/dev/null --build with-jumbo --clean-first
real      0m5.601s

Makes a lot of sense,

I will try jumbo build tomorrow, my way of doing things isn’t really compatible with PCH apparently.
Everything is divided into optimized libs, and I guess what I mostly see is the work of the linker connecting my static libs.

Will try that :+1: :+1:

I just tried using UNITY build and different batch sizes,

It ends up being slower to build (+5 seconds on average).

I will leave it that way for now, since I am not a good candidate for those kinds of optimizations, but I will give it a second try at 100K LoC.

Thanks Oliver :+1:
