I’ve seen posts mentioning that building on macOS ~12+ no longer works using the conventional paths to system libraries, e.g., /usr/lib/libm.dylib, because those libraries are now hidden for security. Instead, linking needs to be done against the corresponding *.tbd file, e.g., /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib/libm.tbd, or perhaps by adding something like -L/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib -lSystem to the link line.
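One common workaround (not something the posts above spell out, so treat it as a sketch) is to avoid hardcoding the CommandLineTools path entirely and instead let CMake resolve the active SDK via CMAKE_OSX_SYSROOT, using xcrun to locate it:

```cmake
# Sketch: point CMake at the active SDK instead of /usr/lib.
# With CMAKE_OSX_SYSROOT set, the compiler/linker search the SDK's
# usr/lib directory, where libm.tbd lives, rather than the removed
# /usr/lib paths. Assumes the Command Line Tools or Xcode are installed.
if(APPLE)
  execute_process(
    COMMAND xcrun --show-sdk-path
    OUTPUT_VARIABLE _sdk_path
    OUTPUT_STRIP_TRAILING_WHITESPACE)
  set(CMAKE_OSX_SYSROOT "${_sdk_path}" CACHE PATH "macOS SDK root")
endif()
```

Recent CMake versions usually detect the SDK on their own, so this is only needed when the default detection picks the wrong (or no) sysroot.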
One of the projects I support has built fine on both Linux and older versions of macOS using CMake, where that library has always been referenced as /usr/lib/libm.so and /usr/lib/libm.dylib respectively. However, building that project (x86_64 binaries) on macOS 12.x fails because CMake (version 3.24.1), when automatically adding -lm to the link line, still apparently expands it to the now-obsolete /usr/lib/libm.dylib path, which cannot be found.
When I look at the CMake-generated files build.make, link.txt, and *.pro, I see many references to -lm and even hardcoded instances of /usr/lib/libm.dylib that break linking. While I can modify the linker attributes per target, it seems I can only add parameters, e.g., -L/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib -lSystem, which does not help because CMake always adds -lm before that. An entry for -lm does not show up when I extract the linker flags for a target named in target_link_libraries(), and there are too many targets with those automatically generated libm dependencies to modify each one individually.
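The inspection described above can be sketched like this (the target name `mytool` is a placeholder):

```cmake
# Sketch: dump what CMake thinks a target links against.
# Note: LINK_LIBRARIES only shows libraries added explicitly via
# target_link_libraries(); implicit entries such as -lm coming from
# an imported target's INTERFACE_LINK_LIBRARIES or the compiler's
# implicit link line will not appear here, which matches the
# behavior described above.
get_target_property(_libs mytool LINK_LIBRARIES)
message(STATUS "mytool LINK_LIBRARIES: ${_libs}")

get_target_property(_iface mytool INTERFACE_LINK_LIBRARIES)
message(STATUS "mytool INTERFACE_LINK_LIBRARIES: ${_iface}")
```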
Maybe I am missing something, but I do not see a global CMake variable that allows me to remove or override inferred dependencies like -lm added to the link line, or otherwise force the expansion of -lm to something I can define based on the version of macOS. The entries in link.txt do seem to be correct for other system libraries.
We build using some 3rd-party libraries, and the CMake output from --trace-expand shows that some of those libs list -lm and /usr/lib/libm.dylib as INTERFACE_LINK_LIBRARIES dependencies in the Targets.cmake included in the build/install of the 3rd-party libs. That is because they were built on macOS 10.15 (Catalina), before the link paths changed.
Apple says you should build binaries on the oldest version of the OS you are going to support (Catalina for us), and then test that binary backwards compatibility works on newer versions of macOS. And it does work to run those Catalina binaries on Big Sur and Monterey. Similarly, we expect developers to be able to check out our source code into a sandbox on macOS Big Sur and/or Monterey and then build it using the pre-built 3rd-party library distributions compiled on Catalina. In keeping with binary backwards compatibility, we do not expect developers to rebuild all those 3rd-party libraries from scratch on Big Sur or Monterey just to change references in CMake-generated files from /usr/lib/libm.dylib to /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib/libm.tbd.
I tried following this post (c++ - change library dependencies in CMake build - Stack Overflow) to list INTERFACE_LINK_LIBRARIES for the one target with the error finding /usr/lib/libm.dylib. It lists all the 3rd-party libraries, but not the system libraries they in turn depend on. Even if that worked, it seems to me I would have to change the path for every system library in every Targets.cmake file in order for it to be interpreted correctly on Big Sur and Monterey.
This almost seems like a case of cross-compilation gone bad, except that instead of compiling for a different target platform, it’s only the system library paths that need to change across different OS releases from the same vendor. Perhaps CMake could employ some kind of augmented find function for system library dependencies on macOS, such that if /usr/lib/*.dylib cannot be found, the next hint to search is /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib/*.tbd. But I do not see how to write that.
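The "augmented find" idea could be approximated today with find_library and an explicit fallback hint. A minimal sketch, assuming CMAKE_OSX_SYSROOT points at a valid SDK (variable name `M_LIBRARY` is just illustrative):

```cmake
# Sketch: prefer the classic /usr/lib path, fall back to the SDK's
# .tbd stub. On macOS, find_library already consults the sysroot,
# so the explicit PATHS hint is belt-and-braces for older setups.
find_library(M_LIBRARY
  NAMES m
  PATHS /usr/lib "${CMAKE_OSX_SYSROOT}/usr/lib")
if(M_LIBRARY)
  message(STATUS "libm resolved to: ${M_LIBRARY}")
else()
  message(WARNING "libm not found in /usr/lib or the SDK")
endif()
```

The resolved path could then be linked via `target_link_libraries(mytarget PRIVATE ${M_LIBRARY})` instead of the bare -lm, though this still does not fix paths baked into a 3rd party’s Targets.cmake.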
Does using CMAKE_OSX_DEPLOYMENT_TARGET=10.15 suffice? The problem is that these older macOS versions reach EOL pretty rapidly, and newer Xcode versions are barred from them. I think just using a newer SDK on the older release (together with the deployment target setting) may be sufficient as well.
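For reference, the deployment target can be set on the command line (-DCMAKE_OSX_DEPLOYMENT_TARGET=10.15) or in the top-level CMakeLists.txt; either way, it must take effect before the first project() call:

```cmake
# Sketch: build with a current SDK but target the oldest supported
# macOS release. Set before project() so compiler detection sees it.
set(CMAKE_OSX_DEPLOYMENT_TARGET "10.15" CACHE STRING
    "Minimum macOS version the binaries must run on")
project(MyProject LANGUAGES C CXX)
```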
There’s not really a way to edit these paths reliably; they’re just strings to CMake. The more proper way would be for them all to be targets and to be re-found at use time so that they can adapt to whatever the ambient environment is as necessary.
I should have mentioned that we already use the -DCMAKE_OSX_DEPLOYMENT_TARGET option to CMake. Building with or without it does not seem to change the search for the old-style .dylib paths on newer versions of macOS that use .tbd files.
Our scientific user base is not in a hurry to upgrade to the latest Apple OS, or to the latest release of our software, until they are done with what could be years of research. Changing the OS that runs our code, or the compilers that built it, means new floating-point results may not be consistent with previous statistical analyses. People who started their research on Catalina, Big Sur, or Monterey may not upgrade to a subsequent major OS release for quite some time.
I’m looking through the CMake documentation and wonder if I’m missing some use of the INTERFACE, PUBLIC, or PRIVATE keywords that could expose the shorthand notation (-lm), or the expanded system library paths, before they get written into the build.make, link.txt, or project files.
Whichever project is hardcoding m in its list of link libraries should instead migrate to using imported targets to represent them. Something like this:
add_library(MyProject::libm INTERFACE IMPORTED)
set_target_properties(MyProject::libm PROPERTIES IMPORTED_LIBNAME m)
Repeat for other system libraries as needed. This would also need to be replicated in the -config.cmake file to make it available there. This should abstract away the path differences between the build and consuming environments.
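A sketch of what that replication might look like in a hypothetical MyProject-config.cmake, guarded so repeated find_package() calls are safe:

```cmake
# Hypothetical MyProject-config.cmake fragment: re-create the imported
# libm target in the consuming project, so the library name is resolved
# in that environment at use time instead of a path being baked in at
# build time. IMPORTED_LIBNAME holds a bare name (m), never a path,
# which is what lets the linker pick libm.dylib or libm.tbd as
# appropriate for the host.
if(NOT TARGET MyProject::libm)
  add_library(MyProject::libm INTERFACE IMPORTED)
  set_target_properties(MyProject::libm PROPERTIES
    IMPORTED_LIBNAME m)
endif()
```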
Unsolicited observation and advice on the situation as a whole (as a complete outside observer). Feel free to ignore it. I don’t think I’ll change anyone’s mind, but hopefully it’s something to think about for future projects.
It would seem to me that choosing macOS as a base you cannot move from for years would be inconvenient for research. How is anything supposed to be reproducible by anyone not keeping a couple of un-updated Apple hardware devices stashed in a closet somewhere?
“Our scientific user base is not in a hurry to upgrade to the latest Apple OS or upgrade to the latest release of our software until they are done with what could be years of research. Changing the OS that runs or the compilers that built our code means new floating point results may not be consistent with previous statistical analyses. People who have started their research work on Catalina, BigSur or Monterey may not upgrade to a subsequent major OS release for quite some time.”
We also build scientific software, but I think your conclusion might not be quite correct. Running software on different versions of macOS should give you exactly the same results, assuming the analysis does not involve randomness. We run our software on 10.13 up through Ventura (13.0) and we get the exact same results in all of our unit tests. We compare our results against pre-existing files that were written on those older machines, so we are confident this works correctly. This really goes for any operating system.
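That kind of regression check can be sketched with CTest’s built-in byte-for-byte file comparison (the file paths here are placeholders, not from the original post):

```cmake
# Sketch: compare freshly generated output against a reference file
# recorded on an older machine; the test fails if any byte differs,
# which is exactly the bit-reproducibility claim being tested above.
enable_testing()
add_test(NAME results_match_reference
  COMMAND ${CMAKE_COMMAND} -E compare_files
          "${CMAKE_BINARY_DIR}/output.dat"
          "${CMAKE_SOURCE_DIR}/tests/reference/output.dat")
```

An exact comparison like this is stricter than a tolerance-based one; it only makes sense when the pipeline is expected to be fully deterministic.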
Also, think about what you are saying: that if I update my operating system (NOT the hardware), somehow calculations are going to be performed differently? While in theory that could happen, it would be a bug in the compiler itself. As long as you are using Apple-supplied compilers, I have no problem feeling 100% confident that this is not a problem I should ever have to deal with.
The real issue you will face is deciding when to cut off support for those older systems. Our last-gen software works back to 10.13. Our new gen works with 10.15 and above. Scientists using those old machines at some point need to decide either to move up or to buy new hardware (through their institution). The other choice is that YOU keep older hardware around to test your software on, and you take on the burden of that support. The choice is yours.
They don’t upgrade because upgrading has unknown quantities about it. They might not be able to get the funding to upgrade their hardware, there may be software policies in place that do not allow them to upgrade, or they may be running someone else’s old software that does not work with new hardware.
We have a long, iterative pipeline of floating-point calculations where some commands can take a seed value as an argument. I’ve found Floating-Point Determinism | Random ASCII – tech blog of Bruce Dawson useful, and I have seen a presentation on how different compilers can change the order of speculative branching and, depending upon optimizations, the ordering of add/shift operations, intermediate results, etc. We have output that seems to indicate that when the compilers change, floating-point results from the completed pipeline start to diverge after the 4th decimal place. Many users understand how changes to the compiler, operating system, and hardware affect their work, and decide for themselves when it works to upgrade. That being said, some still prefer to remain on old and unsupported versions of Linux and macOS. They will often upgrade when we add features/fixes they need and drop support for their current OS as the oldest platform we build on.
I am happy CMake still supports -DCMAKE_OSX_DEPLOYMENT_TARGET for macOS, and I will continue to use that.