The whole point of using Rust or Go instead of C is that the "peril" of implementing things like ASN.1/BER is pretty much eliminated.
As for your former point: I don't follow. Go's deployment infrastructure is a superset of C's, and, if you're a masochist, almost everything in C's deployment toolkit is available to Go projects as well.
My objection is that you are advocating this in the same narrow-focused way that people advocate node with npm, python with pip, ruby with gem: little or no cooperation with the whole system is available yet. This is perfectly fine from the point of view of a group which does one thing, but not from my point of view, running large numbers of diverse systems.
When libfoo gets updated, all N packages on the system that use it via dynamic linking get the benefit as soon as their processes restart. This is highly desirable.
If Go-libfoo is updated, each of those N packages needs to be rebuilt, but I don't have a programmatic way of finding out which ones.
If there are N teams developing those packages, some of them will be faster off the mark than others, and now I have a window of vulnerability that is larger than the one I had when I could update libfoo on day 1.
You have multiplied my workload. I won't do that without a really good reason.
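To make the "programmatic way of finding out" concrete (my sketch, not anything the parent proposed): a dynamically linked binary carries a machine-readable list of the shared libraries it needs in its ELF dynamic section, which even Go's standard library can read. A statically linked Go binary has no equivalent record, which is the whole problem.

```go
package main

import (
	"debug/elf"
	"fmt"
	"os"
)

// neededLibs returns the DT_NEEDED entries of an ELF binary: the
// shared libraries the dynamic linker will load for it. This is what
// lets you answer "which binaries on this box use libfoo?" by
// inspection; a statically linked binary carries no such record.
func neededLibs(path string) ([]string, error) {
	f, err := elf.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	return f.ImportedLibraries()
}

func main() {
	for _, arg := range os.Args[1:] {
		libs, err := neededLibs(arg)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s: %v\n", arg, err)
			continue
		}
		for _, l := range libs {
			fmt.Printf("%s needs %s\n", arg, l)
		}
	}
}
```

Run it over /usr/bin and you have the rebuild list the parent is asking for; ldd gives you the same information interactively.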
But I notice you didn't respond to my SNMP point, which is disappointing, because I was hoping that at least some fake Internet points might accrue to my otherwise fruitless efforts at implementing SNMP from scratch three separate fucking times. Can I at least be rewarded for that by winning a dumb message board argument!?
There's even a cool trick to implementing BER encoders I could have talked about!
Instead, it looks like the thread is going to be about dynamic versus static linkinnzzzzzzzzzzzzzzzzzz.
This is IMHO the most backwards logic ever. Everything about dpkg and apt makes this process easy, from running an internal custom packages repository, through to "apt-get source" for any system package on a moment's notice, through to having everything just magically revert back to Debian-patched versions as part of the normal upgrade process assuming you versioned your custom packages carefully.
A well-run Debian shop is a thing to be seen; unfortunately it's not cohesively documented in any one location on the Internet. After 22 years, almost any problem you can encounter in the wild has a solid process built into Debian to handle it.
Compare that to home directories full of tarballs of binaries with dubious compiler settings and god knows what else. I have no idea why someone would advocate against it, assuming of course they've actually done sysadmin work somewhere other than the comfort of an armchair.
That's a misunderstanding. What Debian, and every other mature Linux distribution, gives you are the tools not only to rebuild a package on a moment's notice (try to build any non-trivial third-party package sometime, and compare that to rebuilding the Debian package) but also to keep track of your patches over time (where did each originate? bug id? upstreamed yet?) and to keep a bird's-eye view over deployment (which nodes? when?). You need to ask yourself those questions, because your auditor will.
Good for you for implementing SNMP, and for using the f word in writing, but maintaining infrastructure is something else. Your reason not to use Debian for critical infrastructure should be contractual liability and/or support concerns, because its build tools and associated policies are solid. It's not the only way to roll, but it's a perfectly valid one.
And yes, I agree. I think we're at the point now where sysadmin-style "I'll wait for Debian to entmoot on this TLS vulnerability and eventually drop something" server maintenance is on its way out for those who operate cattle. Especially high-visibility cattle, as you imply. The industry is teasing the post-distro world into existence but doesn't yet know what it's dealing with on that point. I don't even consider CoreOS a distribution, for example; I think it's more of what Linux will look like in several years' time for cattle herders, while Debian and friends will continue to go in pretty hard on pets.
Dynamic linking creates more problems than solutions in a cattle fleet as opposed to a pet fleet. People who philosophically argue for one or the other are expressing their preference for how to administer a server and do not realize that it is a preference, and not "correctness," per se. The package manager argument is the same way. Execute the code and get it done, or be "correct" and only apply updates through RPM. Cattle, pets. It's all cattle and pets, and the arguments that spawn between the cattle camp and the pet camp will never be resolved, this thread included. People need to realize this, that there is not one way to operate a server, and my way is not more correct than your way.
My way, for example, comes with the baggage of an expected organizational structure to enable its mission. That's not always easy, and I understand that. I can say, however, that the SRE/cattle way makes a hell of a lot of sense at scale.
Apparently gcc is the only C compiler lacking this capability, thanks to glibc.
Also, you can use dynamic linking in Go since version 1.5.
That is true, but updates to native libs that would propagate to C, Python, et al. will generally not propagate to Go because, like Java, Go eschews native bindings.