Why are fixtures forced to be tests?

Hi, I am here to understand why you made fixtures doomed to be tests. I would also like to be pointed to a solution that avoids the drawbacks I currently face.

My use case

I need my tests to touch a file automatically when they are successful. Since add_test doesn't support multiple COMMAND arguments, I use fixtures to do so (with COMMAND cmake -E touch…).

Maybe I should use something else (if so, what?), but to me, that's exactly the purpose of a fixture.
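To illustrate, the kind of wiring I mean looks roughly like this (test names and the marker file are made up for the example):

```cmake
# The real test, plus an extra "test" whose only job is to touch a marker file.
add_test(NAME my_test COMMAND my_test_exe)
add_test(NAME my_test_touch
         COMMAND ${CMAKE_COMMAND} -E touch ${CMAKE_BINARY_DIR}/my_test.passed)

# my_test_touch is declared as the cleanup step of a fixture that my_test
# requires, so CTest runs it after my_test.
set_tests_properties(my_test PROPERTIES FIXTURES_REQUIRED my_test_stamp)
set_tests_properties(my_test_touch PROPERTIES FIXTURES_CLEANUP my_test_stamp)
```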

Drawbacks

  1. The number of reported tests doubled because of the junk added by the fixtures.
  2. The time to run a memory check almost doubled as well, since the startup time of -T memcheck is significant for each test, and the cmake -E… command gets memchecked after every real test.
  3. Probably other things that I did not spot yet

In my opinion, it's conceptually wrong to force a fixture to be a separate test. Sure, it can be useful sometimes, but forcing it just loses flexibility. I would like to hear why this was imposed, and whether there are alternatives for my use case.

Thanks

Hi, author of the fixtures feature here. Fixture setup and cleanup can also sometimes fail, just like other tests. To support various different use cases, it made sense for setup and cleanup steps to also be tests since most of the features available to tests can also be useful when implementing some types of fixture setup and cleanup steps. Implementing them as tests also meant that users didn’t have to learn yet another way of doing similar things, the same knowledge regarding using test properties, defining test cases, etc. transferred across to defining fixtures as well.

Regarding your particular use case, if you want a test to do more than one thing, then you can write a CMake script that carries out those steps and make that your test COMMAND. The downside is that your test will no longer be what memcheck analyses; it will instead analyse the process used to launch the script (i.e. CMake). That's probably not what you want, but if you were happy with this path, you would define the test something like the following:

add_test(NAME CheckSomething COMMAND ${CMAKE_COMMAND} -P MyScript.cmake)

If the script needs information from the project, like the name or location of a target, you can either pass those as definitions to the script with -D options (use this method if you have to use generator expressions) or you can use configure_file() to substitute them at configure time to a generated script and run that in the test (only suitable if you don’t need to use generator expressions).
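For example, a minimal script along these lines might look like the following (TEST_EXE here is an illustrative variable you'd pass in with -D, not something CMake defines for you):

```cmake
# MyScript.cmake -- illustrative sketch only.
# Run the real test executable; TEST_EXE would be passed via -DTEST_EXE=...
execute_process(COMMAND ${TEST_EXE} RESULT_VARIABLE result)
if(NOT result EQUAL 0)
  # Propagate the failure so CTest marks the test as failed.
  message(FATAL_ERROR "Test failed with exit code: ${result}")
endif()
# Only reached on success: touch the marker file.
execute_process(COMMAND ${CMAKE_COMMAND} -E touch my_test.passed)
```

It would then be invoked with something like add_test(NAME CheckSomething COMMAND ${CMAKE_COMMAND} -DTEST_EXE=$<TARGET_FILE:my_test_exe> -P MyScript.cmake), with the -D option before -P.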

What is probably missing to better support the way you are using fixtures is a test property that can be set to say “don’t run memcheck on this test, just run it normally”. I’m not sure of the implications of that and how difficult it would be to implement though.

Hi Scott, thanks for your answer, and thanks for your work.

About your alternative

it will instead analyse the process used to launch the script (i.e. CMake)

Well, it's okay for me as long as CMake doesn't spawn too many unexpected child processes (so the check doesn't fail on other software's memory problems, and also for performance). Then the MemoryCheckCommandOptions --trace-children=yes trick would still detect my errors.
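For reference, my understanding is that this is set by defining the memcheck variables before including the CTest module, along these lines (the exact set of valgrind options here is just what I would try, not a recommendation):

```cmake
# Follow child processes so the real test launched by the CMake wrapper
# script is still analysed, and make valgrind errors fail the test.
set(MEMORYCHECK_COMMAND_OPTIONS
    "--trace-children=yes --error-exitcode=1 --leak-check=full")
include(CTest)  # reads the MEMORYCHECK_* variables when generating the config
```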

The interface I dreamed of

For me, a fixture is a way to trigger some commands before/after a test, but those commands should not be under the scrutiny of the test tools (test reports, memcheck…). So a fixture would be like a named add_custom_command (it would be amazing if a fixture could use target/file-level DEPENDS the same way).

add_fixture(NAME my_fixture
            COMMAND …)

And then, tests would use the same interface:
set_tests_properties(my_test1 PROPERTIES FIXTURES_REQUIRED my_fixture)
set_tests_properties(my_test2 PROPERTIES FIXTURES_SETUP my_fixture)
set_tests_properties(my_test3 PROPERTIES FIXTURES_CLEANUP my_fixture)
set_tests_properties(my_test3 PROPERTIES FIXTURES_CLEANUP my_test2)

And notice here that my_test3 accepts another test as a fixture, which would behave the same way as the current implementation.

If a regular fixture fails, that should appear in the report and cause the tests depending on it to not run. But if it doesn't fail, it should not count as a passed test. In any case, a fixture should never run under "memcheck-like" tools.

I know this may be hard to integrate into the CMake code. But I just wanted to clarify what I was expecting when I posted.

Thanks again for your help and work. I will go with your alternative and see how it goes.

I think adding a FIXTURE test property to mark a test as such to make it ignore things like MemCheck and Coverage gathering could be useful. It could even ignore things like REPEAT options (almost certainly for most fixture cleanups I can think of).

@ben.boeckel Tests that are fixture setup or cleanup steps already have FIXTURES_SETUP or FIXTURES_CLEANUP properties set. The idea I floated above is that we want an additional test property that can be used to bypass memcheck, maybe also another one to bypass repeats (better to keep these two things separate, they might be useful for things other than fixtures). We could use a policy to turn these on by default if either FIXTURES_SETUP or FIXTURES_CLEANUP is defined for a test and off otherwise. That would mean projects only need to update to a newer CMake policy level and get the sort of behavior James is essentially looking for (automatically exclude fixtures from repeats, memcheck, etc.).

But while the "memcheck-like" problem is really annoying, so is the reporting issue. It complicates the report when you use a lot of fixtures, as in my use case, and makes the final count far from representative of the real number of tests run and failed. An update on this should also stop fixtures from being reported the same way as a failing/passing test.

I don't think adding FIXTURES_DONT_REPEAT, FIXTURES_DONT_MEM_TEST, FIXTURES_DONT_REPORT_AS_TEST is the way to go, because you will eventually bump into more problems beyond memcheck, reporting, repeating, coverage… The root of the problem is always treating a fixture as something to be monitored as a test by CTest, when many (maybe I am not objective) use cases just execute a command that we don't want to test.

If you want to fix it all using a single property, it would be FIXTURE_DONT_TEST, meaning CTest would run the fixture outside of its usual test-monitoring environment, check and report any failure (not as a failed test), and mark the tests depending on the fixture as not run. But that's a weird interface, because you'd use add_test followed by a DONT_TEST property.

Again, thanks for your answers and work. I am just adding my two cents as a user who would like a different interface, without considering the work it would need.

Ah, right.

Of course, an alternative solution could be to support multiple commands in a test (as an && command chain) to avoid the abuse of fixtures in this case (if I'm reading it properly). How to apply memcheck to such command chains is another thing that would need some thought.
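In the meantime, a chain like that can be approximated by handing the commands to a shell, though this only works where a POSIX shell is available, and it puts the shell rather than the test under memcheck (my_test_exe and the marker file are illustrative):

```cmake
# Hypothetical workaround: let a shell do the "&&" chaining.
add_test(NAME CheckSomething
         COMMAND sh -c "$<TARGET_FILE:my_test_exe> && touch my_test.passed")
```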

I would be okay with this solution too, because it would fix the reporting, which is the most annoying problem for me. However, memcheck performance may be degraded in the same way as it currently is with fixtures. I don't know where the huge performance difference comes from when using -T memcheck even for an empty test, but if it comes from the cost of tracing a new process, then whatever you do, it would be equally slow.

I think it's cleaner to expose an interface that allows fixtures to not be monitored as tests, but simply run as commands. This would resolve all of the aforementioned problems, and maybe some others that I have not bumped into yet.