Build (but not run) a CUDA app in a Windows Docker image

I am trying to streamline the build process of my application. The idea is to execute a build pipeline of several docker images, each compiling either dependencies or parts of a big C++/CUDA codebase.

The problem I face is that it does not seem possible to compile CUDA code inside a Windows Docker container. As I traced the root of the failure, I found that CMake’s CUDA compiler detection tries to compile a sample .cu file to check that nvcc.exe works properly (in CMakeDetermineCUDACompiler.cmake:262, and then CMakeDetermineCompilerId.cmake:34), but then also tries to run it.

Since it is not possible to run a CUDA-enabled executable inside the container, the whole process fails and CMake reports that it has not found a suitable CUDA compiler.

While this makes sense for some scenarios, it doesn’t for mine. I just need to compile my libraries and executables, package them, and upload them to an FTP server. I have no intention of executing any of them inside the container. Are there any workarounds? All my build-system files look like this one:

cmake_minimum_required(VERSION 3.16.0)
project(sample VERSION 0.1.3 LANGUAGES CXX CUDA)
find_package(CUDA REQUIRED)
add_library(libsample a.h a.cpp b.cuh b.cu)

@robert.maynard

Compiler detection in CMake, across all languages (C++, C, CUDA), expects that it is possible to run a simple executable in the current environment to extract the compiler vendor name and other information.

You can try to set up a toolchain file that specifies all the required information (compiler vendor, etc.) that the executable would output, and set CMAKE_TRY_COMPILE_TARGET_TYPE to STATIC_LIBRARY.
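
A minimal toolchain-file sketch along those lines (the compiler paths and version numbers are placeholders; adjust them to the MSVC/CUDA installation inside your container):

set(CMAKE_SYSTEM_NAME Windows)

# Point CMake at the compilers instead of letting it probe the environment.
set(CMAKE_CXX_COMPILER  cl.exe)
set(CMAKE_CUDA_COMPILER "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/bin/nvcc.exe")

# Build try_compile() checks as static libraries, so no test executable has
# to be linked or run inside the container.
set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)

# If identification still insists on running a test binary, pre-seeding the
# values it would report is a possible (untested here) fallback:
# set(CMAKE_CUDA_COMPILER_ID      NVIDIA)
# set(CMAKE_CUDA_COMPILER_VERSION 11.2)

Pass the file to CMake with -DCMAKE_TOOLCHAIN_FILE=<path> when configuring.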

I build CUDA in Docker where there is no GPU. You can do this without being able to run anything.

Is there a reason why you still have find_package(CUDA)? Now that CUDA is a first-class language, you don’t need it.
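
For reference, a minimal sketch of the sample CMakeLists.txt from the first post with that call dropped (assuming nothing else in the project reads the old FindCUDA variables):

cmake_minimum_required(VERSION 3.16.0)
project(sample VERSION 0.1.3 LANGUAGES CXX CUDA)

# The CUDA language enabled in project() drives nvcc for the .cu files;
# find_package(CUDA) is not needed for that.
add_library(libsample a.h a.cpp b.cuh b.cu)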

I don’t know if there is a nuance related to doing this on Windows; my Docker hosts are all Linux, and Windows support for Docker is potentially iffy.

Indeed. I performed some experiments to confirm that nvcc works perfectly fine and generates an executable. The generated executable also runs fine (as long as it does not actually use the GPU).

Yes. I need the Thrust include dirs.

I think there is a problem either with the MSBuild/CUDA integration or with CMake on Docker, because the MSVC compiler and NVCC work fine in standalone mode, yet CMake can’t find a CUDA compiler. I also tried setting CMAKE_CUDA_HOST_COMPILER, which didn’t work. I wish CMake would tell us why!

The output of cmake . is:

-- Building for: Visual Studio 16 2019
-- Selecting Windows SDK version 10.0.18362.0 to target Windows 10.0.19042.
-- The CXX compiler identification is MSVC 19.28.29915.0
CMake Error at C:/Program Files/cmake/cmake-3.19.2-win64-x64/share/cmake-3.19/Modules/CMakeDetermineCompilerId.cmake:412 (message):
  No CUDA toolset found.
Call Stack (most recent call first):
  C:/Program Files/cmake/cmake-3.19.2-win64-x64/share/cmake-3.19/Modules/CMakeDetermineCompilerId.cmake:34 (CMAKE_DETERMINE_COMPILER_ID_BUILD)
  C:/Program Files/cmake/cmake-3.19.2-win64-x64/share/cmake-3.19/Modules/CMakeDetermineCUDACompiler.cmake:262 (CMAKE_DETERMINE_COMPILER_ID)
  CMakeLists.txt:2 (project)


-- Configuring incomplete, errors occurred!

I can confirm that nvcc works outside CMake. I can compile CMakeCUDACompilerId.cu and it returns 0 after execution. But CMake still refuses to accept CUDA as a valid language; it apparently looks for a CUDA toolset (see the output above). All env vars (CUDA_PATH, etc.) look good. NVCC and MSVC work fine.

So, it seems:

  1. The CUDA Visual Studio integration is necessary for CMake (CMakeDetermineCompilerId.cmake:427).
  2. Only Visual Studio is supported. The Microsoft Build Tools have different installation paths; I assume even the CUDA installer does not support the Build Tools (I can’t find the corresponding .props files).

Can you confirm?

Assuming you are running a modern (> 10) version of CUDA, the CUDA include dirs are added when nvcc is invoked, so I assume you are talking about pure C++ code. I handle this with:

target_include_directories( ${myTarget} SYSTEM PRIVATE
      ${CMAKE_CUDA_TOOLKIT_INCLUDE_DIRECTORIES}
)

Or, for a more surgical approach:

target_include_directories( ${myTarget} SYSTEM PRIVATE
      $<$<COMPILE_LANGUAGE:CXX>:${CMAKE_CUDA_TOOLKIT_INCLUDE_DIRECTORIES}>
)

At least, this works on Linux.
I also use a CUDA base layer for my Docker image, nvidia/cuda:11.0-devel-centos7. Unfortunately, probably due to Microsoft’s licensing of Windows images, I don’t see any Windows versions.

Hope that helps.

For the Visual Studio generator, CMake relies on the MSBuild extensions provided by the CUDA Toolkit. I don’t know whether those support a Microsoft Build Tools-only installation.

The Ninja and Makefile generators on Windows don’t rely on the MSBuild extensions, and should work with your current setup.
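
For example, a sketch of a Ninja-based configure and build inside the container (assuming the VS Build Tools environment has been loaded first, so that cl.exe and nvcc.exe are on PATH; the build type is just a placeholder):

cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
cmake --build build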