FetchContent vs vcpkg, conan?

Newbie question… I’m starting some C++ projects of my own, and figuring out how to use CMake and related tools. I see people use vcpkg or conan for bringing in 3rd party libraries. Then I see FetchContent, which appears to do something similar.

Can someone explain the intent here? Is FetchContent meant to handle the same things as vcpkg? Or is the CMake project moving in the direction of delegating this “dependency management” to other tools?

FetchContent is a way to manage dependencies without tools like vcpkg or conan. Which one is better really depends on what best suits your project.

Cc: @craig.scott

Third party package managers like vcpkg, conan, etc. are dedicated to providing dependencies. They typically have “recipes” for how to provide each dependency and they effectively allow a project to say “give me this” and leave it up to the package manager to take care of the details of how that’s done. That’s a gross over-simplification, but it’s close enough for the purpose of discussion here. Package managers are generally well-suited to publicly available dependencies, but some do support private dependencies as well. Users are typically responsible for making the package manager available on their system in some way.

FetchContent provides a way for a project to say “give me this, and if not provided by other means, use this method to get it”. With CMake 3.23 and earlier, it was more like “give me this and use this method to get it”, but CMake 3.24 added support for integration with find_package() and dependency providers, so it now gives you the ability to try a package manager first, and if that doesn’t supply it, fall back to the download-and-build-from-source details.
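As a small illustration of that CMake 3.24 behaviour, a `FetchContent_Declare()` call can carry `FIND_PACKAGE_ARGS` so that a pre-installed package (or one supplied by a dependency provider) is tried first, with the download details used only as a fallback. A minimal sketch; the googletest repository and tag are just example details:

```cmake
cmake_minimum_required(VERSION 3.24)
project(demo LANGUAGES CXX)

include(FetchContent)

# Try find_package(GTest ...) first. Only if that fails does CMake fall
# back to downloading and building from source with the details below.
FetchContent_Declare(
  googletest
  GIT_REPOSITORY https://github.com/google/googletest.git
  GIT_TAG        v1.14.0
  FIND_PACKAGE_ARGS NAMES GTest
)
FetchContent_MakeAvailable(googletest)
```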

CMake itself doesn’t get involved in defining “recipes” for dependencies for either method. For package managers, the package managers provide the “how”. For FetchContent, the project provides the “how” (with CMake 3.24 or later, it’s more like the “how, if nothing else provides it”).

FetchContent is often a good fit within organisations where they want to bring together multiple internal projects under active development into one build. Developers don’t have to install an extra tool and it has direct support for working across multiple repositories at once. With FetchContent, a dependency is only added to the build if something asks for it, which means you can use CMake cache variables to turn features on and off, and the build will not ask for a dependency that isn’t used.
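For example (the option and project names here are hypothetical), a cache-variable feature toggle can decide whether a dependency is declared and made available at all:

```cmake
include(FetchContent)

# Hypothetical feature toggle: the dependency is only fetched when the
# feature is enabled, so builds with it off never ask for plotlib.
option(MYPROJ_WITH_PLOTTING "Enable the plotting feature" OFF)

if(MYPROJ_WITH_PLOTTING)
  FetchContent_Declare(
    plotlib
    GIT_REPOSITORY https://example.com/internal/plotlib.git
    GIT_TAG        main
  )
  FetchContent_MakeAvailable(plotlib)
endif()
```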

Package managers take an additional step to set up, which some people don’t like, but others don’t mind. They usually require you to adapt your workflow in some way to accommodate how they expect the build to be done. These adjustments can range from quite minor to rather disruptive. Package managers can be a good fit for bringing in mature dependencies, especially popular ones that many projects use. The maintainers of these tools often put in a lot of work to fix and patch dependencies to make them more reliable to use, which can save you a lot of work (if they get it right!). People often underestimate the value these tools provide by curating and managing how the dependencies are built. A common weakness of package managers is that they usually want you to define your dependencies up front, so you can end up pulling in dependencies you may not actually need.

My general advice would be to use FetchContent for internal dependencies if they are being actively developed. If package managers don’t present any major issues for your situation, prefer to use those to provide public dependencies. If you are inside an organisation and you have some internal projects that are relatively mature, consider whether a package manager might still be appropriate for making those available to others within the company. New features available from CMake 3.24 should help with transitioning projects from FetchContent-provided to package manager-provided once a project matures, should that be desired.

The following discussion may also be relevant (I intend to follow-up there with further comments soon): https://discourse.cmake.org/t/fetchcontent-in-dependency-management


There is one more important difference:

With FetchContent, the package is built as a subproject.

So it is included in the compilation database and may use the same compiler settings (e.g. -Wpedantic, …).
In that case, it may also be checked by static analysers like clang-tidy by default!
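If you don't want the fetched sources checked, one manual workaround is to clear the clang-tidy launcher variable around the call that creates the dependency's targets. A sketch, with `mylib` standing in for a previously declared dependency:

```cmake
# Our own targets get checked...
set(CMAKE_CXX_CLANG_TIDY "clang-tidy")

# ...but CMAKE_CXX_CLANG_TIDY only initialises a target property at
# target-creation time, so temporarily clearing it means the targets
# created by the fetched dependency are not handed to clang-tidy.
set(_saved_clang_tidy "${CMAKE_CXX_CLANG_TIDY}")
unset(CMAKE_CXX_CLANG_TIDY)
FetchContent_MakeAvailable(mylib)
set(CMAKE_CXX_CLANG_TIDY "${_saved_clang_tidy}")
```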


We disabled header set verification for things brought into the build via FetchContent. I’d be open to a change that did a similar thing by default for clang-tidy and other co-compile checkers.

Potentially interesting way forward:

Older thread with more manual workaround:

I came to the forum looking for information on exactly this scenario: I have several projects that are currently woven together using FetchContent, and I would like to provide at least SOME of those dependencies via conan. The rationale for this move is detecting and dealing with scenarios where multiple dependencies share a dependency (e.g., my project depends on A and B, and A and B both depend on boost, but perhaps specify different versions).

Is the intent with dependency providers that, for any dependency where I might want to use conan, I should replace it with a find_package()? Then, in the dependency provider macro, I should use conan install? Or is it that I would run conan install prior to running cmake?

Both tools offer a lot of options, and I’m struggling to figure out which workflow and combination of options was in mind during design/development of dependency providers. Any clarity would be extremely helpful, including RTFM if it’s made it into the FM.

For completeness, I’m using conan 1.51.1 and cmake 3.24.3.


Dependency providers can intercept calls to both find_package and FetchContent, so you shouldn’t have to change your existing CMake code to use this feature. Think of it like behavior injection.

What I would do is write a script such as ConanProvider.cmake, which would include content such as:

    function(__conan_provide_dependency method packageName)

        if("${method}" STREQUAL "FIND_PACKAGE")

            # ARGN contains everything passed to find_package()

            # see if the package is already installed
            find_package("${packageName}" ${ARGN} BYPASS_PROVIDER)

            if(NOT ${packageName}_FOUND)
                # run conan install or whatever...
                set(${packageName}_FOUND TRUE)
            endif()

        elseif("${method}" STREQUAL "FETCHCONTENT_MAKEAVAILABLE_SERIAL")

            # ARGN contains everything passed to FetchContent_Declare()

            # SOURCE_DIR and BINARY_DIR will be in the arguments list, possibly with default values
            cmake_parse_arguments(__CONAN_ARG "" "SOURCE_DIR;BINARY_DIR" "" ${ARGN})

            # see if the package is already installed
            find_package("${packageName}" BYPASS_PROVIDER)

            if(NOT ${packageName}_FOUND)
                # run conan install or whatever...
                # if you don't actually download a copy of the source code, then
                # don't set the source & binary dirs in the SetPopulated call below
            endif()

            FetchContent_SetPopulated("${packageName}"
                SOURCE_DIR "${__CONAN_ARG_SOURCE_DIR}"
                BINARY_DIR "${__CONAN_ARG_BINARY_DIR}"
            )

        endif()

    endfunction()

    cmake_language(SET_DEPENDENCY_PROVIDER __conan_provide_dependency
        SUPPORTED_METHODS FIND_PACKAGE FETCHCONTENT_MAKEAVAILABLE_SERIAL
    )
I would commit this file to your project’s source tree, but I would let its use be optional rather than hard-coding it into your project. The actual project’s CMakeLists.txt should just use find_package and FetchContent normally, and if a user wishes to override this behavior with conan, then they can invoke cmake with cmake -D CMAKE_PROJECT_TOP_LEVEL_INCLUDES=ConanProvider.cmake ...

Of course you can provide a CMake preset for this as well, so the invocation becomes much simpler, e.g. cmake --preset use_conan
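Such a preset might look like the following in a CMakePresets.json; this is a sketch, and the preset name, binary dir, and provider path are illustrative:

```json
{
  "version": 4,
  "configurePresets": [
    {
      "name": "use_conan",
      "binaryDir": "${sourceDir}/build",
      "cacheVariables": {
        "CMAKE_PROJECT_TOP_LEVEL_INCLUDES": "${sourceDir}/ConanProvider.cmake"
      }
    }
  ]
}
```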

Thanks for that clarification!

I was wondering how I could re-delegate to find_package() in the provider macro, and see that’s available with BYPASS_PROVIDER; I had missed that blurb in the manual (I found it now, of course). Am I reading the manual correctly that I would accomplish re-delegating to FetchContent by NOT calling FetchContent_SetPopulated() and thus it would then go fetch from git (in my case)?

Either option would be appropriate, depending on what you wanted. You are correct that the project itself should ideally just have a find_package() call and not particularly care how it is provided. The dependency provider should be where you control how it gets provided. If you run conan install prior to running CMake, then you are responsible for knowing all the dependencies that the project might need. If you run conan install inside the dependency provider, you will only end up installing the things you need, but you might call conan install a lot, including multiple times for the same thing, depending on how big and complex your project is. Both have their advantages and disadvantages.
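For the "run conan install inside the dependency provider" option, the `# run conan install or whatever...` placeholder could be filled in roughly like this. This is only a sketch against the conan 1.x CLI; the package reference, version, and folder layout are made up:

```cmake
# Ask conan for just the one package the provider was called about.
execute_process(
    COMMAND conan install "${packageName}/1.0@" -g CMakeDeps --build=missing
            --install-folder "${CMAKE_BINARY_DIR}/conan/${packageName}"
    RESULT_VARIABLE _conan_result
)

if(_conan_result EQUAL 0)
    # Let a follow-up find_package(... BYPASS_PROVIDER) call see the
    # config files that the CMakeDeps generator wrote.
    list(APPEND CMAKE_PREFIX_PATH "${CMAKE_BINARY_DIR}/conan/${packageName}")
endif()
```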

A smarter dependency provider could potentially track what things it has already fetched and provided to speed up calls for the same thing. In the future, I hope to extend the dependency provider mechanism to provide a set of dependencies to the provider in one go, which would allow it to do parallel downloads, etc. if it had that capability. No ETA on that one though, I’m still recovering from the 3 years it took to get the current functionality implemented!


This is kinda backwards to the way I envisaged dependency providers to work. The way I pictured things, the provider would try using conan install first and simply return without setting ${packageName}_FOUND to true if it was unable to provide the dependency. That would then fall back to the default built-in implementation of find_package() when the provider returned. But I can understand how one might want to avoid running conan install if you already have the dependency populated. I’m just not sure that running find_package(... BYPASS_PROVIDER) is the right way to do that, since it will prioritise finding the dependency in system-provided locations ahead of conan-provided ones, which seems back-to-front to me.


I think that BYPASS_PROVIDER is probably a good fit when combined with a prior conan install.

You point out disadvantages to running conan install beforehand, externally, but miss an important advantage: the conan dependency solver will detect conflicts among common transitive dependencies and report them or error out, long before you get weird compile behavior, weird link behavior, or weird runtime behavior.

I very much appreciate your followup explaining your vision, though, and your intended use-cases. It helps me understand how this mechanism can fit in to a broader solution in my environment.

Just spitballing, but rather than reproduce complex dependency solvers in cmake, you could maybe consider breaking up the steps for dependency resolution inside the provider mechanism. If it were broken up into a “declaration/accumulation” phase (kind of like FetchContent’s Declare and MakeAvailable), then a provider could accumulate a list of dependencies across all subprojects and delegate satisfaction to an external solver (like conan) all at once, thereby giving the external tool the opportunity to fulfill that role.

Doing that with conan might be a little more complex, because conan is driven from a DSL-ish Python script and offers a degree of richness/granularity that I’m not aware of cmake offering; a provider in this future framework I’m spitballing might actually end up having to fill in some blanks in that script and then invoke conan on it.

It seems like cmake/conan could integrate/cooperate well, but for the most part they achieve a “so close but not quite” level of cooperation. I was hoping conan 2.0 would see more convergence, but after seeing what’s there I’m not entirely hopeful.


You don’t need to do that. Just return without setting <packageName>_FOUND and CMake will automatically call the built-in implementation for you.
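In other words, a provider that prefers the package manager can be reduced to a pure fall-through. A sketch (the function name is made up; see the cmake_language(SET_DEPENDENCY_PROVIDER) docs for the exact contract):

```cmake
function(__my_provider method packageName)
    if("${method}" STREQUAL "FIND_PACKAGE")
        # ... try to provide the dependency here, e.g. run conan install
        # and then call find_package("${packageName}" ${ARGN} BYPASS_PROVIDER) ...
        #
        # If that didn't work, simply return without ${packageName}_FOUND
        # being set; CMake then falls back to the built-in find_package().
    endif()
endfunction()

cmake_language(SET_DEPENDENCY_PROVIDER __my_provider
    SUPPORTED_METHODS FIND_PACKAGE
)
```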

There are no plans I’m aware of to add any sort of complex dependency solver to CMake. The find_package() command has clear semantics about where it looks for a dependency. If the dependency it finds in turn depends on something else, it is the outer dependency’s responsibility to find the nested dependency. CMake doesn’t try to solve anything there, it’s just a clear DAG relationship and the dependencies that CMake selects from during a find_package() call are based on what is available on the search path it uses.

Similarly, FetchContent doesn’t try to solve dependency constraints. It uses a very simple “first to declare, wins” philosophy. It is the project’s responsibility to ensure that the things it asks for are compatible with each other.
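A tiny sketch of that philosophy (the name and URLs are hypothetical): when two parts of a build declare the same dependency, only the first declaration's details are used:

```cmake
include(FetchContent)

# Declared first, perhaps by the top-level project...
FetchContent_Declare(
  mylib
  GIT_REPOSITORY https://example.com/mylib.git
  GIT_TAG        v2.0
)

# ...so this later declaration, perhaps from a subproject, is ignored.
FetchContent_Declare(
  mylib
  GIT_REPOSITORY https://example.com/mylib.git
  GIT_TAG        v1.0
)

FetchContent_MakeAvailable(mylib)   # uses the v2.0 details
```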

For both find_package() and FetchContent, if a dependency provider is used, then it is the dependency provider’s responsibility to ensure the things it provides are compatible with each other. CMake doesn’t dictate how it should do that.

One of the big advantages of how both find_package() and FetchContent work is that they only pull in a dependency if the project needs it. You don’t always know in advance what the full set of dependencies will be. Some dependencies might depend on logic evaluated during the configure run. It is difficult to retain that advantage if you require the project to define its dependencies up front. Plenty of projects just aren’t structured that way. Certainly if you’re able to define them all up front, I have nothing against CMake being able to provide that list in one hit to a dependency provider. It can then handle that efficiently. This is the situation I was trying to describe in one of my previous replies. I wouldn’t want to require projects to operate that way though.


While I think the ship has sailed on the kinds of power that have been given to and wielded by C++ developers, if I were to start C++ packaging from scratch, I’d say that yes, you do need to declare dependencies up front. You can have optional dependencies, but they must be selected either by target platform or a uniform set of “enable thing X” requests. Basically, what Rust’s cargo does.

But, the ship has sailed and shoving every C++ project through such a filter would be a fool’s errand.


There’s a difference between “define the details of all dependencies this project might need” and “go and get every dependency this project might need”. I have no problem with the former. I do have a problem with the latter. The former is all a dependency provider should need to resolve dependencies, but all too often I see people pushing for the latter, which is an unnecessarily restrictive constraint. It might simplify implementation, but that’s not a good enough reason to impose such a potentially harmful restriction on projects.


Ah. In my case right now, I believe that would be sufficient. In a general case, though, it seems this will be dependent on how the project chose to integrate conan and cmake (i.e., the “generator” chosen, to use conan parlance). I’ll need to read all the docs on find_package several more times to understand all the semantics of the built-in implementation. Also, if one needs to do something in the provider AFTER the built-in implementation, this is the only choice.

I don’t think my suggestion requires the project to define its dependencies up-front. It actually seems like the functionality split would enable better delegation to external package managers, because it would enable the logic inside cmake to drive the determination of optional dependencies, while still allowing the package manager to drive resolution and conflict detection.

Splitting the functionality wouldn’t impose any requirements or take anything away; rather, it seems it would offer the ability to do more sophisticated things.

I don’t think cmake SHOULD include any kind of complex dependency management. I do think that dependency management (things like conflict detection across transitive dependencies, ABI compatibility, etc.) is important, and cmake’s current model seems to impede delegating it.
