There would be inevitable bandwidth costs in updating like this, but that is the trade-off that is explicitly made by choosing to go static.
I don't think anybody would disagree, but you can't dismiss this required effort out of hand. The point is that there are pros and cons. It's arguable that one really ought to have a build server to mitigate the work. For an OS/distribution, this would be a repository of maintained binaries, so you could run (e.g.) apt-get update and have the affected software fixed (for your "enterprise" or similar software, a comparable in-house mechanism). If everything is static, replacing the binaries on the end machine ought to be relatively simple; the effort of library maintenance moves to keeping an "out of band" record of which libraries each application uses, so that when a flaw turns up in libxyz, which client-a, client-b, and client-c are all using, you _know_ you need to rebuild client-[a-c] one way or another. It boils down to a question of responsibility: do you build safeguards into the link/run mechanism (dynamic libraries) and have it adopt a certain amount of responsibility, or do you move the cost up front to build/maintenance and manage the responsibility yourself (with some other appropriate tooling)?
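That "out of band" record could be as simple as a reverse-dependency map consulted whenever a library gets a security fix. A minimal sketch in Python (the app and library names here are invented for illustration):

```python
# Hypothetical record of which libraries each app links statically.
# In practice this would live in the build server's database.
STATIC_DEPS = {
    "client-a": {"libxyz", "libssl"},
    "client-b": {"libxyz", "libpng"},
    "client-c": {"libxyz"},
    "client-d": {"libpng"},
}

def needs_rebuild(flawed_lib):
    """Return the apps that must be rebuilt when flawed_lib gets a fix."""
    return sorted(app for app, libs in STATIC_DEPS.items()
                  if flawed_lib in libs)

print(needs_rebuild("libxyz"))  # the clients linking libxyz statically
print(needs_rebuild("libpng"))
```

The point is that the lookup itself is trivial; the real discipline is keeping the record accurate as builds change, which is exactly the responsibility the static approach takes on.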
I don't think that's true. You could transfer only the binary differences with bsdiff or something similar, and if there are a lot of binaries affected by the same security update, you could go even further and establish a single patch as a base, with all the other patches expressed as differences from that base (or use some other appropriate compression scheme). The bandwidth cost should be tiny.
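The "base patch plus deltas" idea can be sketched with zlib's preset-dictionary support: bytes shared with the base are sent as back-references instead of literals. (bsdiff is purpose-built for executables and does much better on real binaries; this is just the principle, and the patch contents below are invented stand-ins.)

```python
import os
import zlib

# Hypothetical scenario: base_patch is the update already shipped for one
# client; other_patch is a near-identical update for another client.
base_patch = os.urandom(2000)  # stands in for incompressible patch bytes
other_patch = base_patch[:1000] + b"client-b tweak" + base_patch[1000:]

# Plain compression: random bytes barely compress at all.
plain = zlib.compress(other_patch)

# Delta encoding: compress other_patch with base_patch as a preset
# dictionary, so shared runs become cheap references into the base.
comp = zlib.compressobj(zdict=base_patch)
delta = comp.compress(other_patch) + comp.flush()

# A receiver that already holds base_patch reconstructs other_patch exactly.
decomp = zlib.decompressobj(zdict=base_patch)
restored = decomp.decompress(delta) + decomp.flush()

assert restored == other_patch
print(f"full: {len(other_patch)}B  plain: {len(plain)}B  delta: {len(delta)}B")
```

The delta ends up a small fraction of either the full patch or its plain-compressed form, which is why fleet-wide updates of many near-identical static binaries need not cost much bandwidth.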