CTest: skip tests when a RESOURCE_GROUPS resource is not available

Here’s a reduced example. The CMakeLists.txt builds one executable, which I’m running as three tests that all use the same resource group:

cmake_minimum_required(VERSION 3.21)
project(foo)

set(CMAKE_CXX_STANDARD 23)
add_executable(wat wat.cxx)

enable_testing()
add_test(NAME a COMMAND wat)
set_property(TEST a PROPERTY RESOURCE_GROUPS
  "widget:1")
add_test(NAME b COMMAND wat)
set_property(TEST b PROPERTY RESOURCE_GROUPS
  "widget:1")
add_test(NAME c COMMAND wat)
set_property(TEST c PROPERTY RESOURCE_GROUPS
  "widget:1")

where wat.cxx just sleeps for a couple of seconds, logs all the environment variables with CTEST in them, and then fails:

#include <iostream>
#include <stdlib.h>
#include <string>
#include <chrono>
#include <thread>

int main(int, char**, char** env) {
    std::this_thread::sleep_for(std::chrono::seconds(2));

    for (char** e = env; *e != nullptr; ++e) {
        std::string s = *e;
        if (s.contains("CTEST")) {
            std::cout << s << '\n';
        }
    }
    return 1;
}

If I do:

$ ctest -j5

All three are run in parallel.

If I add a spec file:

{
  "version": {
    "major": 1,
    "minor": 0
  },
  "local": [
    {
      "widget": [
          { "id": "aaa" },
          { "id": "bbb" }
      ]
    }
  ]
}

Then this still runs everything in parallel (I guess the command-line argument parser doesn’t like the = syntax?):

$ ctest -j5 --resource-spec-file=spec.json --output-on-failure
Test project /home/brevzin/sandbox/ctest-parallel/build
    Start 1: a
    Start 2: b
    Start 3: c
1/3 Test #1: a ................................***Failed    2.00 sec
CTEST_INTERACTIVE_DEBUG_MODE=1

2/3 Test #2: b ................................***Failed    2.00 sec
CTEST_INTERACTIVE_DEBUG_MODE=1

3/3 Test #3: c ................................***Failed    2.00 sec
CTEST_INTERACTIVE_DEBUG_MODE=1


0% tests passed, 3 tests failed out of 3

Total Test time (real) =   2.00 sec

The following tests FAILED:
          1 - a (Failed)
          2 - b (Failed)
          3 - c (Failed)
Errors while running CTest

But this one does run them two at a time, as desired:

$ ctest -j5 --resource-spec-file spec.json
Test project /home/brevzin/sandbox/ctest-parallel/build
    Start 1: a
    Start 2: b
1/3 Test #1: a ................................***Failed    2.00 sec
CTEST_INTERACTIVE_DEBUG_MODE=1
CTEST_RESOURCE_GROUP_COUNT=1
CTEST_RESOURCE_GROUP_0_WIDGET=id:aaa,slots:1
CTEST_RESOURCE_GROUP_0=widget

    Start 3: c
2/3 Test #2: b ................................***Failed    2.00 sec
CTEST_INTERACTIVE_DEBUG_MODE=1
CTEST_RESOURCE_GROUP_COUNT=1
CTEST_RESOURCE_GROUP_0_WIDGET=id:bbb,slots:1
CTEST_RESOURCE_GROUP_0=widget

3/3 Test #3: c ................................***Failed    2.00 sec
CTEST_INTERACTIVE_DEBUG_MODE=1
CTEST_RESOURCE_GROUP_COUNT=1
CTEST_RESOURCE_GROUP_0_WIDGET=id:aaa,slots:1
CTEST_RESOURCE_GROUP_0=widget

0% tests passed, 3 tests failed out of 3

Total Test time (real) =   4.00 sec

The following tests FAILED:
          1 - a (Failed)
          2 - b (Failed)
          3 - c (Failed)
Errors while running CTest

Cool. The issue, though, is what happens when I don’t have the required resource at all. If I change the spec key from widget to gadget, for instance, the spec no longer declares any widget resources:
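
{
  "version": {
    "major": 1,
    "minor": 0
  },
  "local": [
    {
      "gadget": [
          { "id": "aaa" },
          { "id": "bbb" }
      ]
    }
  ]
}

Then this happens: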

$ ctest -j5 --output-on-failure --resource-spec-file spec.json
Test project /home/brevzin/sandbox/ctest-parallel/build
    Start 1: a
Insufficient resources for test a:

  Test requested resources of type 'widget' which does not exist

Resource spec file:

  spec.json
1/3 Test #1: a ................................***Not Run   0.00 sec
    Start 2: b
Insufficient resources for test b:

  Test requested resources of type 'widget' which does not exist

Resource spec file:

  spec.json
2/3 Test #2: b ................................***Not Run   0.00 sec
    Start 3: c
Insufficient resources for test c:

  Test requested resources of type 'widget' which does not exist

Resource spec file:

  spec.json
3/3 Test #3: c ................................***Not Run   0.00 sec

0% tests passed, 3 tests failed out of 3

Total Test time (real) =   0.00 sec

The following tests FAILED:
          1 - a (Not Run)
          2 - b (Not Run)
          3 - c (Not Run)
Errors while running CTest

It’s good that the tests don’t run. But is it possible to treat these as something other than failures? That is: skipped, not failed?

Think of these as optional tests: if I have a widget available, then go ahead and run the test (possibly failing). If I don’t have a widget available, just don’t run the test. Lack of a widget isn’t an error.

When running the test manually, I can report it as skipped if there’s no widget available, but I can’t find a way to do that from CTest using this RESOURCE_GROUPS mechanism.
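
To make “return skipped” concrete: the closest CTest-side equivalent I know of is the SKIP_RETURN_CODE test property, where the test binary itself detects the missing widget. A rough sketch, assuming wat.cxx has some (hypothetical) way to check for a widget and exits with 77 when it finds none:

# treat exit code 77 from wat as "skipped" rather than "failed"
set_property(TEST a b c PROPERTY SKIP_RETURN_CODE 77)

With that, CTest reports the tests as Skipped rather than Failed. What I’m after is the same outcome, but driven by the RESOURCE_GROUPS/spec-file mechanism instead of logic inside the test binary.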

CTest’s command-line parser needs a polish pass like the one CMake got recently. Please file an issue for this.

For the behavior of the resource groups: @brad.king @kyle.edwards

My guess is that a, b, and c are all assigned the same resource group, “widget:1”. When you run CTest with --resource-spec-file spec.json, it will only run two tests at a time, because the JSON file specifies that there are two “widget” resources available with one slot each. When you run CTest without the --resource-spec-file flag, all three tests run in parallel, because no resource constraints are specified.

To me, the issue is the behavior difference between “no resources specified at all” and “resources specified, but the specific resource not mentioned”.

Right. The tests need some resource, that resource isn’t available, so the tests aren’t run. That all makes perfect sense.

My question is whether it’s possible to mark such a case as skipped rather than failed.