As a programmer, I now don't need to care about dropping privileges, managing logging, daemonization (the moment I need to do the double-fork dance again, chairs will be flying, I swear), dropping into a chroot, or making half-arsed guesses at "is it up and running yet?" from a convoluted mess of shell code that looks like a bunch of hair stuck down a drain for a month.
I just write an easy-to-debug program which I can launch from command line and see it run, and when I'm satisfied, a systemd unit from a cookie-cutter template is going to make it run. Service dependencies are now a breeze, too.
If I need to limit resources, I can just declare the limits in the unit. If I want a custom networking/mount namespace, it's taken care of.
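As a sketch of how declarative that is - a hypothetical unit (the name, binary path, and exact limits are invented; the directives themselves are standard systemd.exec/systemd.resource-control settings):

```ini
# /etc/systemd/system/myapp.service -- hypothetical example unit
[Unit]
Description=Example service with declarative limits

[Service]
ExecStart=/usr/local/bin/myapp
# Resource limits, enforced via cgroups
MemoryMax=512M
CPUQuota=50%
TasksMax=64
# Private namespaces instead of hand-rolled chroot/unshare code
PrivateTmp=yes
PrivateNetwork=yes
ProtectSystem=strict

[Install]
WantedBy=multi-user.target
```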
I wrote about this previously here: https://kevincox.ca/2021/04/15/my-ideal-service/#socket-pass...
Or how about adding new buggy DNS code that doesn't work in common scenarios? Oh, sorry, here's another CVE because we didn't create enough test cases for the corner cases that are actually important.
Or oops, "nobody uses ntsysv", right? Or "We don't need to implement chkconfig even though we broke it".
systemd is a monolithic beast that is absorbing everything else in the system without considering that some of its decisions should be able to be disabled, and a lot of the design decisions are half baked. I don't believe in the philosophy of design its author has embraced. Progress is good, but please, stop breaking shit that has worked for decades. Anyone can write new code that partially implements a feature, but it takes real effort to responsibly migrate users from tools that worked to your new shiny half assed kitchen sink.
It may be great for less-skilled people, but for anyone running anything too critical to outsource support, it's then necessary to have a systemd expert in-house (and such a person has proven extremely hard to find).
I can/could do without timesyncd and resolved (it's easy - just use chrony, for example), but I like udevd now being part of systemd. It would be nice to not write /etc/udev/rules.d/ files and instead have a foo.udev unit type; perhaps that is in our future (we do have .device units, but it's not the same - yet). In this ballpark I think it's more about each distro picking and choosing - Ubuntu, for example, drank the Kool-Aid much deeper than RHEL; RHEL uses chrony out of the box, not timesyncd. However, udevd and logind seem to be common across all distros now, and as another user commented, the KillUserProcesses=yes setting in logind is just horrible to have as a default. The whole "homed" thing makes me sad that it's even being coded; I hope nobody adopts it (I dislike it for the same reason I dislike automount), but someone out there wants it.
The ability to dynamically edit a unit (systemctl edit) and to dynamically alter running service constraints (systemctl set-property) is great, and all PID-file needs are handled in /run (getting rid of the nasty SysV stale-PID-after-unexpected-crash problem, which many scripts failed to handle properly). Users having the ability to run their own private init items (systemctl --user) is great too - timers, socket activation, custom login units, all very well extended down into the user's control. I'm sort of 50/50 on cron vs. timers; that's more of a use-case-by-use-case decision (example: tossing a https://healthchecks.io "&& curl ..." onto a job is just a lot quicker and easier in cron, but running a dyndns script on my laptop with a timer is nicer).
Touching on systemctl edit: it's really easy now to show folks (think a DBA team with only fundamental ops skills) how to quickly chain their After= and Before= needs for start/stop of whatever they run, without going down a rabbit hole. It's simple to use, and the words and design are accessible and familiar; the method by which it works is a little obtuse (it's rooted in understanding the "dot-d" sub-include design pattern). On RHEL at least it uses nano as the default editor - annoying to me, but good for casual non-vim users, and easy enough to override using $EDITOR.
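What `systemctl edit` actually writes is a drop-in fragment under that "dot-d" directory. A hedged sketch of the kind of After=/Before= chaining described above (the service names are invented):

```ini
# Created by `systemctl edit myapp.service`; systemd stores it as
# /etc/systemd/system/myapp.service.d/override.conf (the "dot-d" pattern)
[Unit]
# Start only after the database is up, and before the batch jobs
After=postgresql.service
Wants=postgresql.service
Before=nightly-batch.service
```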
I used SysVinit for all the same years as everyone else (Solaris to Debian to Red Hat; ops touches it all) and wrote my fair share of complex init scripts to start DB2, Oracle, Java appservers (anyone remember ATG Dynamo?); systemd handles natively what 75% of that work was/is (managing PID files, watching/restarting failures, implementing namespaces/cgroups, handling dependency chains, etc.). For those complex scenarios (looking at you, Tomcat) you can still just have a unit launch a very complex shell script "like in the old days". I haven't looked in a while, but the last I knew, in RHEL7 Red Hat did exactly that with Tomcat - just had the systemd unit launch a script.
It is, however, a real bear to debug sometimes - it's far easier to "bash -x /etc/init.d/..." and figure out what in the world is going wrong than it is to debug systemd unit failures. The same holds true for trying to debug DBus (if you've never had to, it's not fun at all without deep dbus knowledge). I would like to see the future add more ops-oriented debugging methodology - if you've ever used "pcs" (the command-line tooling for Pacemaker offered by RHEL), you'll know what I mean; we could really use a "systemctl debug-start" type of interface to give the command line the same experience as the "bash -x" days of old. There are debug settings, they're just not ergonomically dialed in for the ops user, IMHO - systemctl debug-start would save people a lot of headaches.
This is, of course, not a problem, because as systemd folk are wont to point out, systemd is not, in fact, monolithic, meaning they use well-defined interfaces and can be swapped out for an alternative.
2. Some of the systemd logging is binary, so good luck with that if there's a problem.
3. Have you tried non-systemd init systems other than sysvinit?
4. Yes, it is convenient when everything below your development is centralized by a single entity. It can easily provide a consistently useful underpinning. But there's a price - overly strong coupling of the init system, the kernel and part of the user-space, centralized control of, well, almost all of how things work on the system, and stagnation of the ecosystem due to there being only one game in town.
What other system is there, right now, that can do this so well?
I'm definitely onboard with the issues around tight coupling, I'm really not a fan of binary logs etc. But the unit files are pretty awesome IMHO.
So serious question - what else does those as well or better?
(Not to mention UNIX has log files that have been in a binary format since time immemorial, like utmp and wtmp.)
You can do the same painless setup (arguably even easier) with runit, as the base template requirement is literally just
#!/bin/sh
exec the_executable
(Note: the run script should exec the program in the foreground; backgrounding it with & would hide the process from runit's supervisor.)
Granted, logs in runit are optional, but there are problems with default logging too, e.g. Docker will keep filling logs until the disk is full unless you explicitly tell it not to, in either its configuration or your own custom log rotation rules. Neither of which is the default.

I remember when it took multiple days of testing the configuration on different distributions, editions and versions just to get a single daemon process to start without failure. Then doing the whole thing over again because the Debian-based distros did not use the Red Hat startup tools, and had different (or no) runlevel configurations, different service binders, a different NTP daemon, a different terminal daemon, etc. And of course the French customers wanted the service to work with Mandriva, and the German customers wanted SUSE support with extended security options like dropping permissions after boot.
Just like the article mentions you can define a portable service model with security and failure handling built in. There wasn't even anything that came close back in the day. Systemd may not have been built with the Unix philosophy in mind, but at some point that becomes a secondary concern.
Systemd unifies all system resources in units which work anywhere; it's expandable and extendable, user-friendly, allows for remote monitoring, etc.
It's not just breaking init.d scripts; it's ntp, dns, syslog. Things throughout the OS that used to be short commands with muscle memory became ridiculous convoluted commands, like systemd-resolve --status instead of 30 years of typing cat /etc/resolv.conf.
Even when you remember and type that in, you don’t get a simple list of nameserver and host, you get 100 lines of text you have to spend effort parsing to work out what’s going on.
When it’s less mental effort to run tcpdump port 53 to see where your DNS is going, there’s a problem.
For decades it was /etc/init.d/myservice restart
Now is it systemctl restart myservice or systemctl myservice restart? I have no idea as I’m not at a computer.
Or when the restart fails, it doesn't tell you why; it gives you two locations to look for log files about why it might have broken. Init.d scripts didn't do that. Even if there was something really wrong that the log files didn't reveal, running init.d with bash -x allowed easy debugging.
Systemd came in and changed working processes and systems, and gave very little benefit to people with working processes and systems from an operator point of view.
> there were now ridiculous convoluted commands like systemd-resolve --status instead of 30 years of typing cat /etc/resolv.conf
systemd-resolved is not enabled by default in Debian and many distributions, and it is not needed in any way by systemd. If you don't like it, don't use it!
Your rant does not sound very serious. Did you really have "ntp" or "syslog" in your muscle memory? That's strange, because most syslog daemons did not have a `syslog` command.
Anyway, systemd-resolved was created because it has uses. And for systems that used a dns cache (dnsmasq, etc), rejoice, because the config is now simpler than it was.
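For the dnsmasq-style cache case, a hedged sketch of what the resolved configuration can look like (the addresses and domain are placeholders; the directives are standard resolved.conf options):

```ini
# /etc/systemd/resolved.conf -- replaces a hand-rolled dnsmasq cache setup
[Resolve]
# Upstream servers (example addresses)
DNS=192.0.2.1 192.0.2.2
FallbackDNS=9.9.9.9
# Search domain for the local network (placeholder)
Domains=example.internal
# Local caching, which is the whole point of the exercise
Cache=yes
```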
> For decades it was /etc/init.d/myservice restart
> Now is it systemctl restart myservice or systemctl myservice restart? I have no idea as I’m not at a computer.
Before systemd, at least on Debian, for a few years the recommended way was NOT calling `/etc/init.d/something`, but instead `service apache restart`. Since sysv was unsuitable for many uses, several alternatives emerged, like "runit", or "upstart" for Ubuntu. So, before systemd, the recommended way changed with the distribution.
Thanks to systemd, most linux installs now use `systemctl restart service1 service2`. Note that you can now act on multiple services at the same time. You can use this feature as a mnemonic.
> Or the restart fails it doesn’t tell you why, ... Init.d scripts didn’t do that.
In many cases, init.d scripts told you nothing when they failed; each service had its own procedure. Nowadays you can always see what happened with the command systemctl prints on failure.
And `systemctl cat s1` displays starting instructions that are rarely longer than a dozen lines. I remember init.d scripts that were hundreds of lines long, and awfully hard to understand.
As a sysadmin, for me things like that were very, very minor issues.
The main problem was that systemd had awful documentation, written by people who'd clearly never had to use systemd in anger and just assumed that everything would work swimmingly (and please don't say read the man pages.. those are barely adequate).
When things broke there were no simple and obvious ways to fix them; you had to dive into its labyrinthine spaghetti architecture and hope and pray you somehow got the Rube Goldberg machine to work.
Hopefully that's improved by now, and there's some canonical documentation that really shows you how it all fits together and how to fix it when it falls apart.
* Services started with sysvinit would put logs where they want, which is fine if you know where they are but per-service you might be guessing. Having everything always in the same place is handy.
* sysvinit wasn't giving you any of these security benefits.
* If your system really was working fine before, why did you need to upgrade it to a newer distribution with systemd?
You should always use "service myservice start" instead of "/etc/init.d/myservice start". Running "/etc/init.d/myservice start" directly means the service ends up accidentally inheriting parts of your shell's state (environment variables, limits, current directory), which is a different environment from when the service is started at boot. The "service" command carefully cleans up most of the environment before running the script, making it much more similar to what will happen when it starts automatically on next boot.
And if you were used to "service myservice start", it now automatically forwards to "systemctl start myservice" when the /etc/init.d/myservice script does not exist, so it keeps working nearly the same after the transition to systemd.
Some things are a matter of preference, but this bit is just wrong. Init.d was hilariously worse. Some services had their own configuration locations, some had those exposed via /etc/defaults, some used syslog, some redirected stdout/stderr, some redirected one and discarded the other.
You're right that there are two places now - logs are either in the journal or in app-specific log location. And stdout/err go to the journal. Those 2 places mean fewer places to look through than we had before.
Thing is, it was never this command; it was always https://linux.die.net/man/8/service , but your command also worked 99% of the time - until it didn't, and the restarted service misbehaved. Systemd streamlined the whole experience.
From what I remember, everyone else in the UNIX world was already following up with systemd-like systems before it got adopted in the Linux world, and on Red Hat/SuSE distributions daemon scripts existed for quite a long time as well, so no, it wasn't /etc/init.d/myservice restart for decades.
Actually, long before that it was service myservice restart, which still works.
For logging, can't you write a systemd service in bash with -x flag?
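You can - a unit is free to launch the script through the shell with tracing on, and the -x output lands in the journal. A sketch (the script path is hypothetical):

```ini
[Service]
# Trace every shell command the startup script runs;
# the -x trace goes to the journal alongside normal output
ExecStart=/bin/bash -x /usr/local/bin/start-myapp.sh
StandardOutput=journal
StandardError=journal
```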
The best solution is to maintain infinite bash history and stop trying to remember arcane stuff.
Don't even get me started on tools like jq - so convoluted that I use python instead
I felt so alone in the world until this moment
However, Alpine is working on a similar thing based on s6, but with modularity and light weight as design goals. This sounds great to me. I'm not against the idea of a service manager, but I think systemd is overreaching.
I'm sure many people - especially developers who use OSX all the time - love systemd. That's fine, but some people will never love systemd, just like Solaris people merely learnt to accept Linux.
I'm also sure not everyone loves it. Some have moved on, some haven't but now tolerate it, they've spent the time to cope with it, maybe it's costing them more time every day than pre-systemd, but it's not big enough problem to move on. That's just life. I'm sure some people didn't like it when program manager was replaced by the start button in NT 4 either.
It seems that systemd fanboys just can't acknowledge that some people don't like their new world order, which is rather sad in itself.
What's missing from docs like [0] that makes you say that?
[0] https://www.freedesktop.org/software/systemd/man/systemd.uni...
Yeah, and it is not only underspecified, but too weak to be useful, which just pushes all the init.d logic somewhere else. What you've accomplished is moving it somewhere nonstandard, great job.
Also, the command line ergonomics suck. Systemd is deeply unfriendly to humans.
It was a power play in support of a long-term RH strategy, supported by a lot of bad faith arguments. Fascinating to watch as sociology, less nice as a forced-user.
What do you mean? Why the scare quotes? I'm not aware of anything in .service and related files which isn't declarative.
1. People too young to remember stable software. 2. People who have given up, and just accepted that Linux too "just needs a reboot every now and then to kinda fix whatever got broken".
systemd has normalized the instability of shitty system software. And just like how you don't see front page news every day about 1.3M traffic deaths per year because it's not news, you don't see people up in arms about shitty Linux system software.
It's normal now. It didn't use to be.
Yes, ALSA is better than OSS, and then PulseAudio and now pipewire. It can do more. But when did it become acceptable to get shit, just because the shit could do more things?
Pipewire is not bug free (I have a bug that's preventing me from an important use case), but it's sure more reliable than PulseAudio, while still being more capable.
So maybe Pipewire is showing a trend towards coders actually giving a shit?
We still cannot block systemd from making a network socket connection, so the security model is shot right there, by virtue of systemd running as a root process.
In the old days of systemd, no network sockets were made.
Systemd has become a veritable octopus.
Now, I use openrc and am evaluating S6.
What do you mean by "security model" in this case? What model is that?
They might not have been better or more robust, but they were easier to understand and reason about. You could explain the entire thing to the most junior of sysadmins in a few minutes, tell them to read the boot scripts, and they would basically understand how everything worked.
And a few months ago I retried that with systemd. It's really just about 10 lines of config, and you are done.
Besides that, it also has a built-in scheduler, with a command to tell you when the task last ran, whether it succeeded, and what the output was. You could say it is just a cron replacement with a better UI - but why not? I don't really care about the Unix philosophy, I just care about what solves the problem for me.
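A minimal sketch of that cron-replacement shape (unit names are hypothetical; the timer activates a matching backup.service):

```ini
# backup.timer -- example nightly-job timer
[Unit]
Description=Nightly backup

[Timer]
# Every day at 02:00
OnCalendar=*-*-* 02:00:00
# Run on next boot if the machine was off at the scheduled time
Persistent=true

[Install]
WantedBy=timers.target
```

`systemctl list-timers` then shows the last and next run, and `systemctl status backup.service` shows whether the last run succeeded and its recent output.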
Things would have been fine for a lot of people if they had stopped at SysV script replacing, and general start up.
At this point, with all the additional functionality continuously being added, I'm waiting for systemd to fulfil Zawinski's Law:
> Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.
Systemd is 10 years in the making, and still manages to brick production servers.
The problem is not with SystemD or its coding as such, but the ideology it came with, and bad developers who push it.
The last attempts to make it saner basically reverted it back to sysvinit. So, not much difference now.
Uhm, how so?
Like when logind was changed to kill background processes when you log out, by making KillUserProcesses=yes the default. Some Linux distros left that as is, others overrode it in /etc/systemd/logind.conf. So, figuring out what was happening, and how to fix it, was confusing. I had no idea it would have been logind doing that.
Similar for changes introduced with systemd-resolved.
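For the logind case specifically, once you know where to look, the fix is a one-line override. A sketch using the standard drop-in location (the file name is arbitrary):

```ini
# /etc/systemd/logind.conf.d/50-no-kill.conf
# Restore the old behaviour: don't kill background processes on logout
[Login]
KillUserProcesses=no
```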
And it leaves enough info (the process is its own separate session group) for logind to know it should leave it alone.
The ideas are not inherently bad. But they're not thought through, and the implementation is pure garbage.
Like taking the most stable software in the world[1], and going "nah, I'll just replace it with proof-of-concept code, leaving a TODO for error handling. It'll be fine.".
And then the "awesomeness" of placing configuration files wherever the fuck you want. Like /lib. Yes, "lib" sounds exactly like "this is where you place settings files in Unix". At least there's no other central place to put settings.
[1] Yes, slight hyperbole. But I've not had Linux init crash since the mid-90s, pre-systemd
This provides a clean separation between the default configuration and the user configuration.
Can you explain why this is a bad thing?
A counter-example that comes to mind is when a package upgrade requires manual intervention due to file conflicts in /etc. That's what happens when the packager's default configuration interferes with the user's custom configuration.
> This provides a clean separation between the default configuration and the user configuration.
> Can you explain why this is a bad thing?
The normal way to do this is to put the default configuration in /usr/share/.
It's not. That's my point.
systemd does ask some good questions. E.g. I think the logfile situation needed a major shake-up in unix. Too many log file formats, in text, often completely unparsable (if you're lucky then a regex will work for 99.99% of log lines, but not all), and all unique. And the same mistakes being made over and over again. E.g. "oh, we don't log timezone", or even "meh, it's up to the user to parse the time with timezone correctly, even though things like 'MST' is not even unique".
But did systemd fix that? No. It's just that now I have logs in journalctl AND nginx, AND a bunch of other files. Thanks, standard number 15 that was supposed to unify it all. If you build it, they won't just come. Especially when the implementation is bad.
Believe it or not, the above is actually the pro-systemd argument.
Now, for what you describe: Yes. Exactly. I'm saying systemd DOESN'T do this. I'm saying this is a large successful part of Unix, that systemd ignored.
> A counter-example that comes to mind is when a package upgrade requires manual intervention due to file conflicts in /etc. That's what happens when the packager's default configuration interferes with the user's custom configuration.
Is that actually a problem? Maybe I've been spoiled by Debian, but with the combination of the upgrader showing the diff, and `foo.d` config directories, I've never had this problem in about 25 years of running Debian.
But I believe you when you say that others have this problem, and that it's real. But how exactly is systemd fixing it? It doesn't resolve the conflict, if it works like you say, where it's a per-file override. That sounds like breaking the system (usually in subtle ways) instead of highlighting the newly arrived inconsistency. That's just sweeping problems under the rug.
It's not enough to say vaguely that "this is different, so probably solves some problem, somehow. And it didn't cause me personally a problem, so fuck everyone else".
Perhaps the blame lies with the distro packager, but still, to a user it ends up looking strange.
In my experience, systemd config is simple because systemd handles all the complexity. Inside its guts, it is much more complicated than a sysv system - naturally so, because it can do so much more. Those folks love using the latest and greatest kernel functionality for all its glory.
All works well - until it doesn't. When something is broken, suddenly you have to understand all the interdependent components to debug it.
Back in the day, these were not so uncommon, because of bugs or simply unimplemented features...
Here's an example: Someone read that fd-passing is a thing, so now systemd listens to just about everything and spawns stuff on-demand.
Now, that may seem like a good idea, if you think it up in a vacuum and don't have experience with the real world. It's a great idea, if you're in high school. But to have it actually accepted? WTF is even happening?
Oh, let's do this for time sync from GPS. Great. All that time that could have been spent verifying the signal and all, completely wasted, because some jerk thought that it's better to waste 15 minutes of the human waiting, just to save 100kB of RAM.
It's a monumentally bad idea.
And more specifics: Like I said, when you replace init you need to have it not crash.
And then restarting daemons with systemctl almost to a rule fails, and fails silently. Often I have to just kill the daemon manually, and systemctl start it again.
But people aren't complaining about systemd anymore because now there's two kinds of people:
1. People too young to remember stable software.
2. People who have given up, and just accepted that Linux too "just needs a reboot every now and then to kinda fix whatever got broken".
But maybe the trend is changing? Pipewire looks like it's not actually shit (unlike PulseAudio which has plagued us forever), and while it has some important bugs in edge cases, it's actually more reliable than what it's replacing(!)
> As written, your statement is unlikely to convince anyone that isn’t yet already.
It's hard to convince people who don't care. Or indeed those who don't know that no, actually, short of a kernel upgrade "reboot to fix that problem whenever it happens" is not normal, and is a serious bug.
Pre-systemd Linux had as a selling point that it's actually stable, compared to Windows at least. But Windows has gotten much better in the past decade in reliability, and Linux much worse.
systemd is on the level of a re-think by a pretty bright high school student. And that's not a good thing. It's a very bad thing.
> to be convincing, it would have to compare something like bug density to the software projects that it collectively replaces
You're asking me to be data-driven, while being fully aware that systemd isn't, right? Your argument is essentially a fallacy: it implies that the status quo is data-driven.
It's hard to take your suggestion at face value. Especially with many of the same people pushing systemd at the time making up shit like "We know that Unity is objectively the best user experience in the world"[1] (that's why it lost, because nobody liked it, right?[2]).
At the same time I also fall into group (2), above. I don't have time to wrestle in the mud with people who don't care.
[1] a quote like that, I may not have gotten the words just right. but the word "objectively" (without data) was there. [2] and I don't even particularly care about window managers. Before Unity I hadn't bothered switching from "whatever the default is on this system" in most cases.
While systemd has a bunch of container-related functionality, it does not integrate well into the Kubernetes or even Docker workflow. It's used very little in those environments.
If you are building CoreOS or NixOS system images, or traditional Linux system services, then systemd matters. But I think way more services are being built for the container world where these problems are solved differently.
For example, the TLS configuration can be handled with common container patterns. The author's startup example would translate more easily to a full-blown Kubernetes environment once the VC funding hits their bank account if they had used containers from the start instead of first writing the service for systemd.
It's a shame because systemd is very powerful and I've enjoyed using it.
Without (Docker) containers it is:
- build Go binary and install it in production server
- write and enable the systemd unit file
With (Docker) containers it is:
- write Dockerfile
- install Docker in production server
- build Docker image and deploy container in production server
I get the appeal of containers when one production server is used for multiple applications (e.g., you have a Go app and a Redis cache), but for the example above I think containers are a bit of overkill.
* have a production outage because your libc was updated and now your Go apps (which link against it dynamically when cgo is in play, e.g. via the net package) won't start
* mess around with low level cgroup settings if you need to oversubscribe safely
* cry in a corner the second you also need some python libs installed to do some machine learning or opencv or whatever on the side
Where you deploy your EAR/WAR file doesn't matter, so the application container can be running on Windows, any UNIX flavour or even bare metal, what matters is there is a JVM available in some way.
Also on the big boys UNIX club (Aix, HP-UX, Solaris,...) systemd like alternatives have been adopted before there was such an outcry on GNU/Linux world.
On cloud platforms if you are using a managed language, this now goes beyond what Java allowed.
You couple your application with a bunch of deployment configuration scripts, and it is done, regardless of how it gets executed in the end.
The cloud is my OS.
That's another problem to be solved.
There is no need for PrivateTmp= or some of the other configuration shown in this article because the application container already runs in a separate environment.
I think this is worth considering with respect to this article, even though containers definitely bring their own problems.
Putting a huge complex piece of software between yourself and "complexity" doesn't make the system less complex.
I sympathize with the "transition sucks" sentiments elsewhere on this post. Having a bunch of working scripts turned into instant technical debt cannot be pleasant.
But, as with python3, systemd seems to be the way things are headed.
One point is that processes other than root cannot bind to ports < 1024. That was a sensible precaution when computers were big and multiuser, like in a university setting.
However, with single-serving services (e.g. in vm/container/vps/cloud), there is no need for it.
BSD lets you configure it with a sysctl option. But Linux defends that option like it is still 1990.
On NixOS, I patch it like this:
boot.kernelPatches = [ { name = "no-reserved-ports"; patch = path/to/no-reserved-ports.patch; } ];
And the patch itself is just as small:

--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1331,7 +1331,7 @@
 #define SOCK_DESTROY_TIME (10*HZ)

 /* Sockets 0-1023 can't be bound to unless you are superuser */
-#define PROT_SOCK 1024
+#define PROT_SOCK 24
 #define SHUTDOWN_MASK 3
 #define RCV_SHUTDOWN 1

You'll be defining your own systemd units with ease.
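Worth noting, slightly against the "still 1990" framing above: a modern system doesn't strictly need the kernel patch. Since kernel 4.11 there is a sysctl for the threshold, and systemd can grant just the bind capability per service. A sketch (the service name, path and user are invented):

```ini
# Per-service: no root, no kernel patch, just the one capability
[Service]
ExecStart=/usr/local/bin/webapp
User=webapp
AmbientCapabilities=CAP_NET_BIND_SERVICE
# System-wide alternative (kernel >= 4.11):
#   sysctl net.ipv4.ip_unprivileged_port_start=0
```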
systemd to you will be journalctl and systemctl. So pretty good.
I use NixOS but certainly not love systemd. Instead, I've created a way to replace it with s6.[1]
1: https://sr.ht/~guido/nixos-init-freedom/
Cheers, Guido.
How does NixOS make that easier?
For example, systemd by default permanently gives up restarting a service after a small number of tries (e.g. 5), even if you have set `Restart=always`. This is suboptimal for web servers that should recover by themselves after arbitrarily long failures (e.g. network downtimes outside of your control).
On NixOS, you can, from your machine config, set:
systemd.services.nginx.unitConfig.StartLimitIntervalSec = 0;
This sets/overrides just that specific systemd option for the existing nginx module. On other distros, you often have to resort to global mutation in `/etc` that does not compose well.

We use NixOS for our infra (having used Ansible before), and this ability to override anything cleanly while keeping the defaults otherwise made for much more maintainable infra code and fewer ugly/surprising compromises.
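For comparison, the same override expressed outside Nix is a drop-in, which works but lives as mutable state in /etc. A sketch (note that StartLimitIntervalSec= belongs in the [Unit] section in current systemd):

```ini
# /etc/systemd/system/nginx.service.d/restart.conf
[Unit]
# Disable the start rate-limit so Restart=always really means always
StartLimitIntervalSec=0

[Service]
Restart=always
RestartSec=5
```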
when you have to run "systemctl disable systemd-timesyncd systemd-resolved systemd-networkd" to start to get back to sanity, that's not init
Systemd docs: https://www.freedesktop.org/software/systemd/man/index.html
List of directives: https://www.freedesktop.org/software/systemd/man/systemd.dir...
Unit-specific configuration: https://www.freedesktop.org/software/systemd/man/systemd.uni...
Service-specific configuration: https://www.freedesktop.org/software/systemd/man/systemd.ser...
Timer-specific configuration: https://www.freedesktop.org/software/systemd/man/systemd.tim...
The LoadCredentials thing reminds me of ConfigMaps in K8s. Is there a more general thing in systemd, e.g. a LoadConfig?
A more generic approach than LoadCredentials, I think, is the EnvironmentFile= directive, if you want to pass multiple env variables along to your process without individual Environment= directives.
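A hedged sketch combining the two (the file paths are invented; note the directive is spelled LoadCredential= in current systemd):

```ini
[Service]
# Bulk KEY=value pairs, read when the service starts
EnvironmentFile=/etc/myapp/env
# A single secret; the service finds it at $CREDENTIALS_DIRECTORY/db-password
LoadCredential=db-password:/etc/myapp/db-password
```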
It will be interesting to see if one day a replacement for systemd comes along and people who once championed systemd will begin to use the arguments the people who do not prefer systemd use to defend their choices for not wanting to use the next init system manager.
Part of the critique of systemd is the basic architectural choice of having this monolithic layer between regular user apps and the kernel. So, in a sense, the idea is _not_ to replace systemd with a better-written systemd, but to do things differently.
I might not be getting the point of the talk but I really appreciate the argument that Benno Rice presents.
Passing an arbitrary fd or socket from one process to another solves many problems and we are in the habit of doing it now.
It is probably unavoidable, looking at how complex modern compilers, processors and kernels are. They sure do make a lot of things simpler, though.
You will incur a lot of complexity trying to deal with init systems that aren't much better than traditional init.
Have you looked at s6? It’s a compelling alternative.
Churn isn't a virtue.
If there were any issues they would be worked on.
Things can be built to last, including software. That so many programmers today are not building such things (possibly they are incapable) does not change the fact that some did so in the past (whether intentionally or not), and some still can.
Does everything you'd expect of a single small program: it manages your NTP, resolv.conf, DNS caching, mDNS, network devices, etc.
Importantly, it weighs only about 1/100th of what systemd does.
> systemd provides ways to restrict the parts of the filesystem the service can see.
So like chroot and namespaces? Why do I have to depend on systemd when these are native features provided by Linux?
So systemd provides a friendlier abstraction of these concepts. Great, but so do Docker and Podman and many other tools that can actually be installed without taking over the rest of the system.
Having your application actually use systemd libraries further deepens this dependency and makes it usable only on a subset of Linux machines. That would be fine for some controlled production deployment, but it is awful for usability and adoption.
Not like namespaces - using namespaces. And for the same reason we use other high-level abstractions and high-level languages rather than handcrafted assembly. You don't have to depend on it either - you can still use chroot directly if you want, but it's more work that way.
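For comparison, the chroot/namespace-style knobs in a unit file look something like this (a sketch; the directives are real systemd options, the service itself is hypothetical):

```ini
[Service]
ExecStart=/usr/bin/myapp
# chroot-style filesystem root for the service
RootDirectory=/srv/myapp
# mount-namespace based isolation
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
# network namespace containing only a loopback device
PrivateNetwork=yes
```

Each line replaces what would otherwise be manual chroot/unshare plumbing in a wrapper script.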
> Great, but so do Docker and Podman and many other tools that can actually be installed without taking over the rest of the system.
Docker installs a service which takes over lifecycle management, restarts, and traffic proxying for apps. It injects and manages multiple firewall chains. It pretty much takes over network management. And it's still stuck on the old cgroups format, so it forces that on your system. It really doesn't win this comparison.
> Having your application actually use systemd libraries
You don't need them. Everything from the post is defined in simple environment variables. For example socket activation is maybe 3 extra lines when done from scratch.
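To illustrate, a minimal socket-activation sketch in Python with no systemd bindings at all - just the LISTEN_PID/LISTEN_FDS environment variables from the sd_listen_fds protocol (the fallback address is made up):

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # systemd passes activated sockets starting at fd 3


def systemd_sockets():
    """Return sockets inherited from systemd, or None if not socket-activated."""
    # systemd sets LISTEN_PID to the PID it spawned, so a forked child
    # (or a plain command-line launch) correctly sees "not activated".
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return None
    count = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]


def get_server_socket():
    socks = systemd_sockets()
    if socks:
        return socks[0]
    # Not launched by systemd: bind ourselves, for easy command-line debugging.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8080))
    srv.listen()
    return srv
```

Paired with a `.socket` unit, the service only starts when the first connection arrives; launched from the command line, it behaves like any ordinary server.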
How so? If I need filesystem isolation, I'll use the simplest tool that provides it. In this case probably a container runtime. Note that none of your criticism about Docker applies to Podman.
Why would I ever want to use containers with a tool that forces (OK, strongly suggests...) me to use it as an init system, logging system, network manager, DNS resolver, and whatever other aspect of my Linux system its authors think it should manage?
I apologize for retreading the same discussion on this topic, but like others mentioned, adopting an incredibly complex tool doesn't mean you're simplifying. You're just working at a higher level of abstraction, which can be comforting, but simplifying would be to use the underlying systems directly, or to use a tool that focuses on only a single aspect of what you need (e.g. containerization).
> You don't need them. Everything from the post is defined in simple environment variables. For example socket activation is maybe 3 extra lines when done from scratch.
Great, then the article shouldn't import systemd bindings... My point is that the program is now tied to systemd systems. Containers don't impose such restrictions.
While this is a good writeup, and you end up with a service, you still need to manage a machine, with all the risks involved: server reboots, updates, networking, etc.
AWS Fargate or the newer App Runner will manage a container almost hassle-free.