While the "memcheck-like" problem is really annoying, the reporting issue is too. It clutters the report when you use a lot of fixtures, as in my use case, and makes the final count far from representative of the real number of tests run and failed. A fix for this should also stop fixtures from being reported the same way as a passing/failing test.
I don't think adding FIXTURES_DONT_REPEAT, FIXTURES_DONT_MEM_TEST, FIXTURES_DONT_REPORT_AS_TEST is the way to go, because you will eventually bump into more problems than just memcheck, reporting, repeating, coverage… The root of the problem is that CTest always treats a fixture as something to be monitored like a test, when many use cases (maybe I am not objective here) just execute a command that we don't want to test, as in the sketch below.
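For context, here is a minimal sketch of the kind of setup I mean, using the existing FIXTURES_SETUP/FIXTURES_REQUIRED properties (the names `start_server` and `real_test` are just placeholders):

```cmake
# The fixture is just a command we need to run, not something we want to test,
# yet CTest still counts and reports it as a test of its own.
add_test(NAME start_server COMMAND ./start_server.sh)
set_tests_properties(start_server PROPERTIES FIXTURES_SETUP server)

add_test(NAME real_test COMMAND ./run_real_test)
set_tests_properties(real_test PROPERTIES FIXTURES_REQUIRED server)
```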
If you want to fix it all with a single property, it would be something like FIXTURE_DONT_TEST: CTest would run the fixture outside of its usual test-monitoring environment, check for and report any failure (but not as a failed test), and mark the tests depending on the fixture as not run. But that is a weird interface, because you would call add_test and then immediately tell CTest not to treat it as a test.
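To show why that reads oddly, here is a hypothetical sketch of the interface (FIXTURE_DONT_TEST is my proposal, not an existing CTest property):

```cmake
# Hypothetical: you declare a *test*, then immediately tell CTest it is not one.
add_test(NAME start_server COMMAND ./start_server.sh)
set_tests_properties(start_server PROPERTIES
    FIXTURES_SETUP server
    FIXTURE_DONT_TEST TRUE)  # hypothetical property, does not exist in CTest
```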
Again, thanks for your answers and your work. I am just adding my two cents as a user who would like a different interface, without weighing the work it would require.