I am trying to streamline the build process of my application. The idea is to run a build pipeline of several Docker images, each compiling either dependencies or parts of a large C++/CUDA codebase.
The problem I face is that it does not seem possible to compile CUDA code inside a Windows Docker container. Tracing the root of the failure, I found that CMake's CUDA compiler detection compiles a sample .cu file to check that nvcc.exe works properly (in CMakeDetermineCUDACompiler.cmake:262, and then CMakeDetermineCompilerId.cmake:34), but then tries to run the resulting executable. Since it is not possible to run a CUDA-enabled executable inside a Docker container, the whole process fails and CMake reports that it has not found a suitable CUDA compiler.
While this makes sense for some scenarios, it doesn't for mine: I only need to compile my libraries and executables, package them, and upload them to an FTP server. I have no intention of executing any of them inside the container. Are there any workarounds? All my build-system files look like this one:
```cmake
cmake_minimum_required(VERSION 3.16.0)
project(sample VERSION 0.1.3 LANGUAGES CXX CUDA)
find_package(CUDA REQUIRED)
add_library(libsample a.h a.cpp b.cuh b.cu)
```
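One direction I am considering, sketched below and not verified in a Windows container: CMake's documented `CMAKE_<LANG>_COMPILER_FORCED` variable tells CMake to trust the given compiler and skip testing whether it can produce a working executable. Set before `project()`, it might let configuration proceed even though the detection binary cannot run. The nvcc.exe path here is an assumption; it would need to match the CUDA installation inside the container image:

```cmake
# Sketch of a possible workaround: skip CMake's "does the compiler work"
# check for CUDA. Both variables must be set before project() enables the
# CUDA language. The nvcc.exe path below is an assumption -- adjust it to
# the CUDA Toolkit location inside your container image.
cmake_minimum_required(VERSION 3.16.0)
set(CMAKE_CUDA_COMPILER "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.0/bin/nvcc.exe")
set(CMAKE_CUDA_COMPILER_FORCED TRUE)
project(sample VERSION 0.1.3 LANGUAGES CXX CUDA)
```

Alternatively, both variables could be passed on the command line (`cmake -DCMAKE_CUDA_COMPILER_FORCED=TRUE ...`) to avoid editing every CMakeLists.txt. Would something along these lines work, or does CMake still insist on running the test executable?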