How to properly import CMake files/modules?

I created a collection of CMake functions and macros that will be rolled out on our development machines. Our projects should just need to include the collection’s main .cmake file to make use of it.

My idea was to make the collection’s location known on the development machines somehow and let the projects call include() to import this functionality, since its documentation says:

Load and run CMake code from a file or module.

But it seems I cannot make include() search locations other than the current directory, CMake’s own module directory, and the entries of CMAKE_MODULE_PATH (not in this order). The latter seems like the right one to use, but it is not read from an environment variable (unlike CMAKE_PREFIX_PATH for find_package()), so the user would need to pass it on the command line of the CMake call. But the user/developer should not need to care about this path.

Is it possible to make CMake aware of such external modules on a machine?
Is this even a use case for include(), or should I treat my collection as a package? My understanding is that packages provide build dependencies, whereas CMake code should be included.
Or should I deploy my scripts into the CMake installation’s module directory? This doesn’t seem right to me, as it makes them look like part of the CMake distribution.

At the moment, the closest solution I have is to read the path from a dedicated environment variable right before the include() call. But I’m wondering whether my original idea is achievable at all.
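For reference, the environment-variable workaround described above could look like the following minimal sketch. The variable name MYCOLLECTION_DIR and the file name NiftyFunctions.cmake are placeholders, not names from an actual setup:

```cmake
# Sketch of the workaround: read the collection's location from an
# environment variable set by the rollout mechanism, then include it.
if(DEFINED ENV{MYCOLLECTION_DIR})
  include("$ENV{MYCOLLECTION_DIR}/NiftyFunctions.cmake")
else()
  message(FATAL_ERROR "Environment variable MYCOLLECTION_DIR is not set")
endif()
```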

The way that I would do this is:

find_package(NiftyFunctions REQUIRED) # adds its module directory to `CMAKE_MODULE_PATH`
include(NiftyAPIFooBar) # searches in `CMAKE_MODULE_PATH` entries for `NiftyAPIFooBar.cmake`

NiftyFunctionsConfig.cmake would have something like:

list(INSERT CMAKE_MODULE_PATH 0
  "${CMAKE_CURRENT_LIST_DIR}/APIs")

in it (not sure what else might make sense in there).
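To flesh that out a little: a minimal NiftyFunctionsConfig.cmake along these lines could also guard against repeated inclusion. The include_guard() call and the install path in the comments are assumptions for illustration, not part of the answer above:

```cmake
# NiftyFunctionsConfig.cmake -- minimal sketch.
# Assumed layout on the developer machine:
#   <prefix>/NiftyFunctionsConfig.cmake
#   <prefix>/APIs/NiftyAPIFooBar.cmake
# With <prefix> reachable via the CMAKE_PREFIX_PATH environment
# variable, find_package(NiftyFunctions) needs no per-project setup.

include_guard(GLOBAL)  # avoid mutating CMAKE_MODULE_PATH twice

# Make the collection's modules visible to include():
list(INSERT CMAKE_MODULE_PATH 0 "${CMAKE_CURRENT_LIST_DIR}/APIs")
```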

One drawback of rolling out your CMake files to each developer machine and loading them with include() is that those files live outside the project’s version control. If you change them, the project loses traceability: you cannot reliably rebuild an earlier version of the project, because these CMake file dependencies are not pinned anywhere in its history.

Let me suggest a different alternative. Consider whether you can have the project download those CMake files on-demand using the FetchContent module instead. You put the CMake files in their own git repository or whatever version control system you prefer to use. In the project, you specify exactly the git hash of the commit to retrieve in the FetchContent_Declare() command. Now the project has full traceability of the CMake files, since the git hash is included in the project’s own sources. As an added benefit, you no longer need any separate infrastructure to roll out your CMake files to developer machines ahead of time.

  • If needed, you can use FetchContent before the first project() call to retrieve part or all of the toolchain being used to build the project.
  • FetchContent avoids communicating with the remote end if it knows it doesn’t need to. If you use a git hash for the GIT_TAG, then it can tell if it already has the commit it needs and will use it without needing to do a git fetch. Developers can therefore work offline after the first run has downloaded the required commit from your repo.
  • You can make including the CMake files part of the fetched repo. The FetchContent_MakeAvailable() command will call add_subdirectory() on the fetched repo’s source directory if there is a CMakeLists.txt file at the top level. You can put whatever commands you want in that CMakeLists.txt file, so you have an injection point to pull in CMake files with include() to define commands, etc.
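The approach described above might look like the following sketch. The repository URL, project name, and commit hash are placeholders; substitute your own:

```cmake
include(FetchContent)

# Pin an exact commit hash (placeholder shown) so the project records
# precisely which version of the CMake files it builds with.
FetchContent_Declare(NiftyFunctions
  GIT_REPOSITORY https://example.com/nifty/cmake-functions.git
  GIT_TAG        0123456789abcdef0123456789abcdef01234567
)
FetchContent_MakeAvailable(NiftyFunctions)

# If the fetched repo has a top-level CMakeLists.txt,
# FetchContent_MakeAvailable() has already pulled it in via
# add_subdirectory(). Otherwise, include modules directly, e.g.:
# include("${niftyfunctions_SOURCE_DIR}/NiftyAPIFooBar.cmake")
```

Note that FetchContent lowercases the declared name for the `<name>_SOURCE_DIR` variable, hence `niftyfunctions_SOURCE_DIR`.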

I recommend you take a look at the FetchContent module documentation to get an idea of how it works and what you can do with it.


Compared to a git submodule, this has the downside of making it harder (though not impossible) to publish or archive the project, e.g. as a tarball.

Upside is that it can be used with any data source.

FetchContent sounds very interesting - I didn’t know about that. I’ll give it a try.

Nevertheless, the answers of @ben.boeckel and @hsattler also provide some interesting input. While I was thinking about using either include() or find_package(), I didn’t think about using both. I like the fact that it would use the rolled-out version of the script collection (it’s a managed environment), but it contradicts your point about traceability (which I definitely need to consider).

Using a git submodule also seems like a good option (as we are using git). Probably because we’re trying to get rid of submodules as build dependencies, they didn’t come to mind for this purpose. But one would fit well here. I will certainly use one in another place for a similar problem.

Therefore, thanks a lot to all of you!