"It is ridiculous that this has been a known problem for so long. It has wasted thousands of hours of people's time, either debugging the problems, or debating what to do about it. We know how to fix the problem." https://www.evanjones.ca/setenv-is-not-thread-safe.html
30 years after these decisions were made, most sensible people do single-threaded GUIs anyway (that is, all calls to the windowing API come from a single thread, and all redraws occur synchronously with respect to that thread; this doesn't preclude threads functioning as workers on behalf of the GUI, but those workers are not allowed to make windowing API calls themselves).
Consequently, the locking overhead present in the Win32 API is basically just dead weight, there to make sure that "things are safe by default".
There's a design lesson here for everyone, though precisely what it is will likely still be argued about.
"If you detached a thread in your application using a non-Cocoa API, such as the POSIX or Multiprocessing Services APIs, this method could still return NO."
Also, I've never heard of this behavior despite years of developing for macOS (admittedly tangentially). I don't see how it could work, given that threads can come and go during the life of the application.
How much overhead is it though? IIRC uncontended mutexes are practically free, especially when they're only being used from a single thread.
Our industry is way too eager to make things unsafe for the sake of marginal performance differences that are irrelevant for most use cases, IMO.
You could wrap setenv in a mutex, but that's not good enough. It can still be called from different processes, which means you'd need a more expensive and complex syncing system to make it safe.
That balloons out to other env-related functions needing to honor the same synchronization primitive in order for there to be even a semblance of safety.
However, you still end up in a scenario where you can call
setenv
getenv
and that would still be incorrect: between the set and the get, even with mutexes properly in place and coordinated among the different applications, you have a race condition where your set can be overwritten by another application's set before your get runs. Now, instead of actually making these functions safe, you've buried the fact that external processes (or your own threads) can mess with env state.

The solution is to stop using the env as some sort of global variable and instead treat it as a constant captured when the application starts. Using setenv should be mostly discouraged because of these issues.
Making global state thread safe, especially state like the env that has no reason to be modified, or even read, very often, is a trivial, well-studied, and well-understood problem. Could an intern do it? Probably not. Could literally any maintainer of a standard C library? Easily.
This is much more of a culture problem preventing such obvious flaws from being recognized as such.
Side-note: your set-then-get example is a theoretical problem in search of a use case. Why would you ever want to concurrently set an env var and expect to be guaranteed to read that same value? And even if this is a real thing that applications really use, exposing a new function to sync anything on the env mutex is, again, trivial. So, if you really needed that, you could do
lockenv
setenv
getenv
unlockenv
And problem solved.

This needs to be fixed inside libc, but there's no way to do so completely without breaking backward compatibility.
It's a different story for languages/environments that are supposed to be safe by default and that have language features ensuring safety (actors, optionals, etc.), but not for something like libc, which has a standard it has to conform to and something like 100 years of history.