General pattern for making libraries available to other projects?

So far my projects have been rather small and simple. I haven’t had much need to reuse parts, nor have I shared my libraries.

However, my projects have grown and there are some I’m considering making available for general consumption. But I’m a little lost as to what the common workflows are.

So far I’ve used git submodules for my own libs, which I then consumed via add_subdirectory.

And for third party libraries that have no out-of-the-box Find module or package, I’ve written my own Find module and copied it to the projects that need it.

But how does one generally make a library available to other projects on your system without consuming it as a submodule?

I suspect that install might be key here, but I’m not fully understanding it…

If I make a library Foo, and set up the project to use install for the binaries and headers, then on Windows it appears to copy to Program Files? Is that a common thing people do?

And how do you deal with different versions of a library?

Or, do people create a dedicated directory for each project and then set the install destination of each required lib to that directory? (Assuming then that the project adds it to CMAKE_PREFIX_PATH?)

Also, if a dedicated installation directory is used, it seems it would quickly get tedious to maintain as the number of dependencies grows.

I’m struggling to figure out what a typical workflow for this would be. I’m finding a lot of information about the different ways you can do things, but I find it hard to find up-to-date recommended info. (I have Professional CMake by Craig Scott - which is a great help - but this topic is still fuzzy to me.)

A more flexible approach might be to use FetchContent instead of git submodules, but both are valid.

An alternative would be to create a repo which has just your CMake helpers and then have each project that needs it pull that repo in with FetchContent. That way you only need to maintain the code in one place. I’ve used this approach in production projects and it was very helpful in reining in rampant cut-n-paste across projects and reducing the maintenance burden that would otherwise have come with that.
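To illustrate, a minimal sketch of what the consuming project might do (the repo URL, tag, and module name here are hypothetical; assumes CMake 3.14 or later for FetchContent_MakeAvailable):

include(FetchContent)
FetchContent_Declare(
  mycmakehelpers
  GIT_REPOSITORY https://example.com/mycmakehelpers.git  # hypothetical URL
  GIT_TAG        v1.0.0                                  # hypothetical tag
)
FetchContent_MakeAvailable(mycmakehelpers)

# Make the helper modules visible to include()
list(APPEND CMAKE_MODULE_PATH "${mycmakehelpers_SOURCE_DIR}/cmake")
include(MyHelperModule)  # hypothetical module provided by the helpers repo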

Yes, defining install() rules is the preferred way to go if you can. Use install(TARGETS) and make sure you also include EXPORT options in those commands. That associates the targets with an export set. Then do install(EXPORT) to add the export set to the installed artefacts. For an example in my book, see Section 25.3: Installing Exports.
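A minimal sketch of what that might look like (the target and export set names here are hypothetical):

install(TARGETS foo
  EXPORT fooTargets
  RUNTIME DESTINATION bin
  LIBRARY DESTINATION lib
  ARCHIVE DESTINATION lib
  INCLUDES DESTINATION include
)

install(EXPORT fooTargets
  NAMESPACE Foo::
  DESTINATION lib/cmake/Foo
)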

The last piece is to install a <projName>Config.cmake file that pulls in those installed export set files. It’s a bit more involved than one would like, but hopefully it becomes a bit clearer with an example. I’m sure there are more accessible ones around, but you can take a look at the bonus slides at the end of my CppCon2019 talk. They show the sort of structure I’m describing here. You can find that talk and the slides here:
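At its simplest, that installed Config file (FooConfig.cmake here, matching the hypothetical names in the sketch above) may do little more than pull in the export file that install(EXPORT) created next to it:

# FooConfig.cmake
include("${CMAKE_CURRENT_LIST_DIR}/fooTargets.cmake")

The project then installs it alongside the export file:

install(FILES FooConfig.cmake DESTINATION lib/cmake/Foo)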

It installs to the location specified by CMAKE_INSTALL_PREFIX. That cache variable should not be set by the project, it is intended as a user control. If you want to install directly from your build (not something I’d typically recommend - create packages instead), you can set CMAKE_INSTALL_PREFIX to where you want to install to.
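For example, installing directly from a build might look like this on the command line (a sketch; assumes CMake 3.15 or later for cmake --install):

cmake -S . -B build -DCMAKE_INSTALL_PREFIX=/some/install/prefix
cmake --build build
cmake --install build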

Install them to different places. You shouldn’t try to overlay different versions of the same package into the same place because some files will inevitably conflict. Some people choose to use versioned subdirectories below some common point. While that solves the conflict issue, it is less convenient for users when they update your package in-place. For example, say the user installs your package to C:\Program Files\Foo-1.2.3. They set up a shortcut to an executable in that directory. Then they update the package to version 1.3.0. Either the name of the install directory changes (e.g. to C:\Program Files\Foo-1.3.0), in which case their shortcut breaks, or they re-use the existing location which now has a misleading version in its name. For this reason, I typically recommend not including a version number in the install location. The user can always override that if they want to install multiple versions simultaneously. The key here is that making the install location version-specific becomes an opt-in choice for the end user.

There’s a fair bit of variety in the different ways end users want to use their systems. As a project maintainer, part of your job is to try to avoid locking them out of choices. Aim for “it just works” behavior by default, and provide the ability to customise for those who want to do something other than just install a version and have it largely work for the main use cases. Installing to the default location on Windows probably means users can set CMAKE_PREFIX_PATH to C:\Program Files and your project will be found automatically (maybe it’s even already part of the default search path, I don’t recall off the top of my head). If you are providing packages instead of relying on users installing directly from a build tree, then users can choose where to unpack that package and they can also then point CMAKE_PREFIX_PATH at that base install location.

Not sure if I’ve addressed all your questions, but this is a complex area, in part because there are just so many different ways things can be done (every platform is different and there are different conventions and policies even within an individual platform).

Hi @thomthom,

Briefly about my case…

I have tons of CMake based projects that depend on each other. The very first one is the reusable CMake modules project, which is used by all the others. All projects provide CMake configuration files, so it’s very easy to use find_package and find whatever you need. This also gives flexibility to deal w/ multiple versions. For third-parties w/o CMake config files there are FindXXX modules in a separate project (the one mentioned above :).

We have several target platforms (e.g., Windows, Ubuntu 16.04 to 20.04, CentOS 6, 7, 8) and for each of 'em we build a native package (NuGet, DEB, RPM) using CPack. Our CI deploys releases to the internal repositories (NuGet, RPM, DEB). All a developer needs is a package manager to install the required packages (apt-get, yum, or nuget) to work on his project, and find_package will do the rest.

Also we have Docker images with pre-installed third parties and pre-configured internal RPM/DEB repositories to be used in CI or development. For Windows we use Vagrant + VirtualBox, also w/ pre-installed third parties and pre-configured NuGet repositories.

Thank you very much for your feedback. It helps. It’s clear to me that I’m struggling with this because I’ve not worked with this workflow and I’m not familiar with how people might work with installing CMake packages. Most examples don’t explain how it might be used, so that has had me stumped in getting to grips with what is needed for a good package - or even how I would use one myself in a sane way without consuming it as a submodule/FetchContent.

Are *Config.cmake files typically always generated at install time?

So while the default on Windows is Program Files, a typical workflow might be that a developer has a dedicated folder for libraries. But is this location shared across multiple projects?
Or would one even have multiple CMAKE_INSTALL_PREFIX locations depending on project type?

Just to reiterate this point: would you for instance create a dedicated location for a set of somewhat related projects and install the libraries they rely on there? Because most likely it would make sense for them to use the same versions?

Ok - there is something there I haven’t understood. What’s meant by installing directly from the build? Installing (copying) just binaries and headers to a location, without a CMake config file to accompany them?

Yes, Program Files is the default location on Windows, which I’m personally not too keen on. I prefer applications there, not libraries. And I’m somewhat paranoid about what kind of Windows magic happens in that area of the file system.
But that is my preference, and I’m fully on board with letting the user have control over this. I guess this is one of the reasons I was curious to understand what a workflow might look like on Windows and whether people in general just install to Program Files.

I think this is another point that has led me to be stumped. There is a library I use for many of my projects that has somewhat frequent major and minor updates. Which one to use for each project depends on the feature set and compatibility target of each project. (This is related to extension projects where the library is the host application’s libraries. It seems to me to be a somewhat different scenario than what you described, because the lib represents the application consuming my extension.)
In my mind I would like to have all these versions easily available via find_package, but control versioning via the version argument of find_package. Would this be going against the grain? If not, what would a workflow for consuming this library be?

Another thing: this particular library is distributed as headers and binaries only. There is no way for anyone to build it themselves. But if I want to make it easily accessible to my projects, I can write Find modules - that’s all fine. But copying the same Find module to each project is tedious. If I wanted to make it into a package, would the process be to make a CMakeLists.txt file that takes the location of the original lib, sets up imported targets, and then has install commands which generate *Config.cmake files?

Yup - complex indeed. And I’m grateful for you taking the time to respond at such length. My conclusion about the main source of my confusion is that it’s not a lack of examples of how to set up installation, but a lack of examples in context. While there might be a lot out there, I wish there were some key examples in the CMake docs/examples that gave some context on how it is used.

FindModules or include modules? Or both?

This is interesting. At my office we currently use Conan. (No CMake - yet, but we are researching that.) For my personal projects I’ve not made use of package managers yet. (I think mainly because of the need for a private repository.)
But if I could solve that (without having to set up my own server, like with Conan), then a lot of this would probably feel a lot smoother to me. And make more sense.

@thomthom
FindModules or include modules? Or both?

This is an ordinary CMake project w/ its own CMakeLists.txt. The project consists of various CMake modules, yes, including some FindSmth.cmake for third-parties w/o CMake support. The idea is similar to KDE’s ECM project. The other projects depend on it and look for it via an ordinary find_package() call in order to use “exported” modules and helper functions from it in their CMakeLists.txt.
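To sketch the consuming side (names here are hypothetical), assuming the helpers project installs a Config file that appends its own module directory to CMAKE_MODULE_PATH:

find_package(MyHelpers REQUIRED)  # hypothetical helpers package
include(SomeHelperModule)         # hypothetical module it provides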

… I think mainly because of the need for a private repository

We use JFrog Artifactory (cloud) to manage all the types of repositories (including Vagrant and Docker to store CI/developer’s images).

The two typical workflows we have:

  1. build everything from scratch – i.e. the developer performs cd build && cmake -DCMAKE_INSTALL_PREFIX=... .. && cmake --build . ... && cmake --install ... for every project he needs.
  2. reuse prebuilt binaries of dependencies – the developer gets 'em from the Artifactory repos. For Linux targets everything is easy-peasy – just run yum install <deps> or apt-get install <deps> and run CMake to configure the build he wants. For Windows, NuGet can be configured differently:
    2.1 to install all packages to a selected folder (say, c:\dependencies), where every find_package in the projects has PATHS "${MY_DEPENDENCIES}" "$ENV{MY_DEPENDENCIES}" hints (see the sketch after this list). So, whenever you want, that “common” location can be altered by passing -DMY_DEPENDENCIES=... to CMake.
    2.2 NuGet installs dependencies to build/.nuget (i.e. directly into the build directory) and find_package also has PATHS hints for that.
    As for me, I prefer the 2nd option :slight_smile: and use a “common” location for third-parties only.
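A sketch of the PATHS hints mentioned in 2.1 (SomeDep is a hypothetical package name; MY_DEPENDENCIES as above):

set(MY_DEPENDENCIES "c:/dependencies" CACHE PATH "Common dependency install location")
find_package(SomeDep CONFIG REQUIRED
  PATHS "${MY_DEPENDENCIES}" "$ENV{MY_DEPENDENCIES}"
)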

BTW, NuGet can be configured to get packages from a “local” repository – i.e. just an ordinary folder w/ *.nupkg files in it :slight_smile:

One of the reasons for that workflow is to avoid any network operations during configure and build (that also eliminates one more randomly triggered build failure point :).
When you have your packages installed and managed via a real package manager (not really related to NuGet however :), one can go offline to build, develop, and fix whatever he needs.

That is why we don’t need (or use) fetch or external project modules, Conan, and other “package manager emulators” ^W^W ^W :slight_smile: tools that download smth at configure/build steps…

They might exist as plain files in the source tree or be generated at configure (or build) time, but either way the primary workflow is for them to be installed at install time. Most of the time, the contents of the *Config.cmake file will be fixed. Sometimes you might want to substitute in a few things, which can typically be done using configure_file() at configure time. The resultant file is then installed using install(FILES).
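A minimal sketch of that configure_file() pattern (file names here are hypothetical):

configure_file(
  "${CMAKE_CURRENT_SOURCE_DIR}/FooConfig.cmake.in"
  "${CMAKE_CURRENT_BINARY_DIR}/FooConfig.cmake"
  @ONLY
)
install(FILES "${CMAKE_CURRENT_BINARY_DIR}/FooConfig.cmake"
  DESTINATION lib/cmake/Foo
)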

Users might install each project to their own area, or they may install them to a common install prefix. Your aim should be to avoid doing anything that would preclude either use case if you can. The user should also be able to choose pretty much any install base point they want, so try to make your installed project make no assumptions about where it is installed to. That’s a fairly fluffy piece of advice, but only you know what sort of things your project is trying to do, so you will need to look at whether anything is making such assumptions yourself.

Personally, I try to avoid installing dependencies to a central place as much as possible. I prefer to have them downloaded/built in my build tree. That ensures I am working within an isolated area that won’t affect any other project I’m working on elsewhere in my filesystem (I frequently work across multiple unrelated projects). Sometimes you have projects that have different requirements for the same dependency, so you may not be able to satisfy both requirements with a centrally installed package. This is the sort of thing that FetchContent routinely solves for me.
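For illustration, a minimal FetchContent sketch for a hypothetical dependency (assumes CMake 3.14 or later):

include(FetchContent)
FetchContent_Declare(
  somedep
  GIT_REPOSITORY https://example.com/somedep.git  # hypothetical URL
  GIT_TAG        v2.1.0                           # hypothetical tag
)
FetchContent_MakeAvailable(somedep)

# The dependency is now built inside this project's build tree;
# myapp and SomeDep::somedep are hypothetical target names
target_link_libraries(myapp PRIVATE SomeDep::somedep)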

On the other hand, some people prefer to use a common set of dependencies and use a central place for them. Or they might leave the management of them up to a package manager of their choice. If that satisfies your use case, then there’s nothing wrong with that, you just need to be aware of the pros and cons.

I meant something like make install. That might install a config file as part of the install (if you told it to with a relevant install(FILES) command). Also be aware that an install might do more than just copy binaries. It may also modify those binaries to do things like rewrite RPATH/RUNPATH entries (not on Windows though).

Yes, you can request a specific version, for example:

find_package(somePackage 1.2.3 EXACT REQUIRED)

Be careful with that though, it severely limits the set of packages your project can be used with. It’s not all that common that a project needs an exact version. You can direct find_package() to use a specific location if you know which package you want to use. Read the docs for the find_package() command and look for mentions of <PackageName>_ROOT as one way to do that.
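For example (a sketch; the <PackageName>_ROOT variable is honoured from CMake 3.12 onwards via policy CMP0074):

cmake -DsomePackage_ROOT=/path/to/the/wanted/install ...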

Prefer not to provide a Find module. Provide a *Config.cmake file instead. Install that *Config.cmake file along with the rest of your package. Consumers can then use find_package() to bring it into their build. The only thing they might need to do is set CMAKE_PREFIX_PATH or a <PackageName>_ROOT to help find_package() locate it if it isn’t installed to one of the locations that find_package() searches by default.
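The consuming side then stays simple (hypothetical names, continuing the Foo sketch from earlier):

# In the consumer, possibly after: cmake -DCMAKE_PREFIX_PATH=/where/foo/is/installed ...
find_package(Foo REQUIRED)
target_link_libraries(myApp PRIVATE Foo::foo)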

You don’t need to create the imported targets yourself. That’s what install(TARGETS) does for you, but you want to use the EXPORT keyword with that and also do an install(EXPORT). Your *Config.cmake file would then include() the file that install(EXPORT) creates. That should be straightforward if you install the *Config.cmake file and the file that install(EXPORT) creates to the same directory (this is the typical practice).
