Imagine you have an executable that links a random library with a global variable. Now you have a shared/dynamic library that just so happens to use that same library deep in its bowels. It's not in the public API, it's an implementation detail. Is the global variable shared across the exe and the shared lib, or not? On Linux it's shared; on Windows it's not.
I think the Windows way is better. Things breaking because two different DLLs happened to use the same symbol name under the hood is super dumb imho. Treating libraries as black boxes is better. IMHO. YMMV.
> No import lib (typo! lib, not line)
In Linux (the userland, not the kernel, blah blah blah) when you link against a shared library - like glibc - you typically link the actual shared library. So on your build machine you pass /path/to/glibc.so to the linker. Then when your program runs it dynamically loads whatever version of glibc.so is on that machine.
On Windows you don't link against foo.dll itself. Instead you link against a thin, small import lib, conventionally named foo.lib (ideally it would be foo.imp.lib so the role is obvious).
This is better for a few reasons. For one, building a program that merely uses a shared library shouldn't require a full copy of that library on your build machine. All the linker actually needs is the list of exported symbols, which is exactly what the import lib provides.
Linux (gcc/clang, blah blah blah) makes it really hard to cross-compile, and really hard to link against an older version of a library than the one on your system. It should be trivial to link against glibc 2.15 even if your system is on glibc 2.40.
> global system shared libraries
The Linux Way is to install shared libraries into the global path. This way when openssl has a security vuln you only need to update one library instead of recompiling every program.
This architecture has proven - imho objectively - to be an abject and catastrophic failure. It's so bad that the world invented Docker, so that a big, complicated, expensive, slow packaging step has to be performed just to reliably run a program with all its dependencies.
Linux Dependency Hell is 100x worse than Windows DLL Hell. On Windows the Microsoft system libraries are ultra-stable, and virtually nothing gets installed into the global path. Programs simply ship the DLLs and dependencies they need - which is roughly what Docker does, except Docker comes with a lot of other baggage and complexity that honestly just isn't needed.
These are my opinions. They are not held by the majority of HN commenters. But I stand by all of them! Not mentioned is that Windows has significantly better profilers and debuggers than Linux. That may change in the next two years.
Also, super duper unpopular opinion, but bash sucks and any script longer than 10 lines should be written in a real language with a debugger.