My objection is that you are advocating this in the same narrow-focused way that people advocate node with npm, python with pip, ruby with gem: little or no cooperation with the whole system is available yet. This is perfectly fine from the point of view of a group which does one thing, but not from my point of view, running large numbers of diverse systems.
When libfoo gets updated, all N packages on the system which use it via dynamic linking get the benefit as soon as they restart. This is highly desirable.
If Go-libfoo is updated, each of those N packages needs to be rebuilt, but I don't have a programmatic way of finding out.
If there are N teams developing those packages, some of them will be faster off the mark than others, and now I have a window of vulnerability that is larger than the one I had when I could update libfoo on day 1.
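With dynamic linking, the consumers of libfoo are at least discoverable from the outside. As a sketch (not a full audit tool), `ldd` reports every shared object a dynamically linked executable loads, so a script can walk a fleet's binaries and flag the ones linked against a vulnerable library; a statically linked Go binary reports essentially nothing this way, which is exactly the visibility problem above:

```python
import subprocess

def shared_libs(path):
    """Return the shared-object names a dynamically linked binary loads,
    as reported by ldd. A statically linked binary yields little or nothing."""
    out = subprocess.run(["ldd", path], capture_output=True, text=True).stdout
    return [line.split()[0] for line in out.splitlines() if line.strip()]

# e.g. anything linked against libfoo would show a libfoo entry here
print(shared_libs("/bin/sh"))
```

At the package level, `apt-cache rdepends <package>` answers the same question from dependency metadata, which is precisely the programmatic answer that statically linked binaries don't offer.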
You have multiplied my workload. I won't do that without a really good reason.
But I notice you didn't respond to my SNMP point, which is disappointing, because I was hoping that at least some fake Internet points might accrue to my otherwise fruitless efforts at implementing SNMP from scratch three separate fucking times. Can I at least be rewarded for that by winning a dumb message board argument!?
There's even a cool trick to implementing BER encoders I could have talked about!
Instead, it looks like the thread is going to be about dynamic versus static linkinnzzzzzzzzzzzzzzzzzz.
This is IMHO the most backwards logic ever. Everything about dpkg and apt makes this process easy, from running an internal custom packages repository, through to "apt-get source" for any system package on a moment's notice, through to having everything just magically revert back to Debian-patched versions as part of the normal upgrade process assuming you versioned your custom packages carefully.
A well-run Debian shop is a thing to be seen; unfortunately it's not cohesively documented in any one location on the Internet. After 22 years, almost any problem you can encounter in the wild has a solid process built into Debian to handle it.
Compare that to home directories full of tarballs of binaries with dubious compiler settings and god knows what else. I have no idea why someone would advocate against it, assuming of course they've actually done sysadmin work anywhere aside from the comfort of an armchair.
The problem is that you have to wait for the patch to be bundled. I've watched that take a long time, while services I knew to be vulnerable had to sit there and be vulnerable because the organization deploying the service didn't have any infrastructure to apply a custom patch.
Consider the degenerate case, where you have to wait for a Debian patch because you paid for the research that found the vulnerability. More than one of my clients wound up in that situation. But that's not the only way to learn about a simple, critical source patch that won't land in a Debian patch for days.
$ apt-get source bash
$ cd bash*/
$ quilt new my_urgent_patch
$ quilt add file1 file2               # register files *before* modifying them
$ patch -p1 < ~/my-urgent-patch.diff
$ quilt refresh                       # record the changes into the quilt patch
$ dpkg-buildpackage ...
$ dupload ../*.changes
# then trigger apt-get upgrade on the target machines

That's a misunderstanding. What Debian, and every other mature Linux distribution, gives you are the tools not only to rebuild a package on a moment's notice (try to build any non-trivial third-party package sometime, and compare that to rebuilding the Debian package) but also to keep track of those patches over time (where did it originate? bug id? upstreamed yet?) and to keep a bird's-eye view over deployment (which nodes? when?). You need to ask yourself those questions, because your auditor will.
Good for you for implementing SNMP, and for using the f word in writing, but maintaining infrastructure is something else. Your reason not to use Debian for critical infrastructure should be contractual liabilities and/or support reasons; its build tools and associated policies are solid. It's not the only way to roll, but it's a perfectly valid one.
i don't even
care
anymore.
sorry, but i still don't grok it. for example, if you take a person 'object' defined as:

Person {
    name string (or equivalent asn.1 type-name, with type-identifier == 1)
    age  int    (or equivalent asn.1 type-name, with type-identifier == 2)
}

since b.e.r. is basically a tlv (type-length-value) encoding, a person with name "james" and age 10, i.e.

james_person = Person(name = 'james', age = 10)

gets hex-encoded as:

"james" : 01 05 6a 61 6d 65 73
10      : 02 01 0a

so the whole thing looks like this:

"01 05 6a 61 6d 65 73 02 01 0a"

of course this would be prepended with the appropriate type-number for 'Person' with the corresponding length. if we assume that 'Person' gets a type-identifier == 3, then the 'james_person' instance would be encoded as:

"03 0a 01 05 6a 61 6d 65 73 02 01 0a"

where '0a' == total length (10 bytes) of this instance of the person object.
could you please elucidate your trick with the above example? thanks for your insights!
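One widely used trick for BER encoders (this may or may not be the one the parent had in mind) is to write the output buffer back to front: since every write prepends, by the time you emit a constructed type's header the length of its contents is already known, so no second pass or length pre-computation is needed. A minimal sketch, using the Person example above (the `BerWriter` class and its method names are illustrative, not from any real library):

```python
class BerWriter:
    """Back-to-front BER/TLV writer: every write prepends to the buffer."""

    def __init__(self):
        self.buf = bytearray()

    def prepend(self, data):
        self.buf[:0] = data

    def mark(self):
        # Remember the buffer size; everything prepended after this point
        # becomes the value of the TLV closed against this mark.
        return len(self.buf)

    def close(self, tag, mark):
        length = len(self.buf) - mark
        assert length < 128          # short-form length only, for simplicity
        self.prepend(bytes([tag, length]))

w = BerWriter()
person = w.mark()
# Fields go in reverse order, because each write prepends.
age = w.mark()
w.prepend(bytes([10]))
w.close(0x02, age)                   # 02 01 0a
name = w.mark()
w.prepend(b"james")
w.close(0x01, name)                  # 01 05 6a 61 6d 65 73
w.close(0x03, person)                # outer Person TLV, length now known

print(w.buf.hex(" "))                # 03 0a 01 05 6a 61 6d 65 73 02 01 0a
```

The payoff is with deeply nested constructed types: a forward-writing encoder must either pre-compute every nested length or go back and patch headers, while the back-to-front writer gets all lengths for free.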
And yes, I agree. I think we're to the point now where sysadmin-style "I'll wait for Debian to entmoot on this TLS vulnerability and eventually drop something" server maintenance is on its way out for those who operate cattle. Especially high-visibility cattle, as you imply. The industry is teasing the post-distro world into existence but doesn't yet know what it's dealing with on that point. I don't even consider CoreOS a distribution, for example; I think it's more of what Linux will look like in several years time for cattle herders, while Debian and friends will continue to go in pretty hard on pets.
Dynamic linking creates more problems than solutions in a cattle fleet as opposed to a pet fleet. People who philosophically argue for one or the other are expressing their preference for how to administer a server and do not realize that it is a preference, and not "correctness," per se. The package manager argument is the same way. Execute the code and get it done, or be "correct" and only apply updates through RPM. Cattle, pets. It's all cattle and pets, and the arguments that spawn between the cattle camp and the pet camp will never be resolved, this thread included. People need to realize this, that there is not one way to operate a server, and my way is not more correct than your way.
My way, for example, comes with the baggage of an expected organizational structure to enable its mission. That's not always easy, and I understand that. I can say, however, that the SRE/cattle way makes a hell of a lot of sense at scale.
gcc seems to be the only C compiler lacking this capability, thanks to glibc.
Also, you can use dynamic linking in Go since version 1.5.