Even if Debian wasn't perfect before systemd was introduced, at least I knew there was a very high probability that I could trust it to function well.
That stopped being the case after systemd was introduced. I've had far too many problems caused by systemd, to the point where all trust I had in newer versions of Debian has been lost.
Initially, I thought that maybe the problem was with me. But as I investigated the issues I was having with systemd, I'd see so many other bug reports, forum postings, mailing list postings, IRC logs, blog articles, and other online communications from people who were also having many other problems with systemd.
Debian offered a much better user experience before it switched to systemd, and a much worse user experience since.
Any specific issues? I didn't see any mentioned. No offense. One factor may be that Arch prioritizes not patching upstream software, which helped keep it out of the attack's targeting here, and it doesn't go overboard with default configs, which I've long appreciated.
Not to distro-war; I'm very grateful for Debian. My background is finding Linux in the mid-00s and breaking many SuSE, Ubuntu, and one or two Debian systems before finding something I could understand, repair, and maintain in Arch in 2008.
systemd has helped enterprise Linux stay relevant, made it even easier to package for, and has been a useful tool for me in diagnosing service issues and managing badly behaved software.
I'm not sure how often it's posted here, but Benno Rice, formerly of the FreeBSD Core Team, has an excellent and amusing talk on systemd's technical merits.
Magic, indeed.
That is not to say systemd isn't a hypertrophied pig, but it does do important work.
What could be done to prevent supply chain attacks more broadly?
But xz is not the only thing libsystemd links against. By removing it, sshd is freed from many other potential risks as well.
Still the question remains: what technology could be implemented to mitigate this type of attack (beyond sshd)?
For example, Linux sandboxing is poor, and SELinux is usually not enforced.
For example, dropping the libsystemd dependency.
The UNIX philosophy was right all along - each tool does one simple thing.
Most things we want to do are necessarily complex, so dogmatically adhering to "one simple thing" necessarily drives you towards a towering heap of composed dependencies.
If you want to get rid of the supply chain, you want everything specifically to be non-composable, so that everything has to be reinvented from scratch for that specific solution.
But this implies removing dependencies on various libraries, and keeping such an important process small is already worthwhile, even though it will still load PAM modules, which leaves it prone to similar issues.
Same as the suckless people. They were right after all.
That's a straw man. Nobody was ever pushing for people to import all of libsystemd just to use the communication protocol with systemd. That protocol was designed to be very easy and simple to implement on your own, precisely so you didn't have to depend on anything else; libsystemd just happened to provide an implementation too, and somebody was lazy and imported that instead. I seriously doubt that's what anyone thought was best practice.
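For reference, the readiness half of that protocol (the sd_notify(3) interface) really is small: a single datagram of `KEY=value` text sent to the unix socket systemd passes in `$NOTIFY_SOCKET`. Below is an illustrative re-implementation in plain POSIX C, a sketch rather than any project's actual code; the function name `notify` is mine.

```c
/* Minimal re-implementation of the sd_notify(3) readiness protocol,
 * using only POSIX sockets -- no libsystemd needed. Illustrative
 * sketch, not production code. */
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Send a state string (e.g. "READY=1") to the datagram socket systemd
 * passes in $NOTIFY_SOCKET. Returns 0 on success, -1 on error or when
 * not running under systemd. */
static int notify(const char *state)
{
    const char *path = getenv("NOTIFY_SOCKET");
    /* A path starting with '@' denotes a Linux abstract-namespace socket. */
    if (path == NULL || (path[0] != '/' && path[0] != '@'))
        return -1;                       /* not under systemd */

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    if (addr.sun_path[0] == '@')
        addr.sun_path[0] = '\0';         /* abstract namespace marker */

    int fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC, 0);
    if (fd < 0)
        return -1;

    ssize_t n = sendto(fd, state, strlen(state), 0,
                       (struct sockaddr *)&addr, sizeof(addr));
    close(fd);
    return n == (ssize_t)strlen(state) ? 0 : -1;
}
```

A daemon would call `notify("READY=1")` once initialization is done; that is essentially the whole integration, which is the parent's point about not needing the library.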
Contrary to popular belief, systemd isn't about linking gigantic binaries and libraries together into a giant blob, as everyone thinks (e.g. the standard nonsense line from detractors that `systemd-resolved` is "part of systemd" as in "part of the same binary", which it isn't). It's about letting programs talk to each other, so that you get reliable, featureful integration on your desktop instead of everything being a half-working mess of shims and ad hoc communication, and about providing centralized services that solve certain tasks consistently and from first principles, so that you don't have every single daemon reimplementing its own, or a central implementation that's a big pile of preprocessed shell scripts, spinlocks, edge cases, and bullshit.
> Same as the suckless people. They were right after all.
Right about what? Right in myopically judging software quality by "lines of code" and setting nonsensical arbitrary line limits, thereby confusing a (poor) map for the territory? Yes, "few lines of code" can make software good in some ways (less buggy, etc.), but it can also make it quite bad in others (less featureful, doesn't handle edge cases, brittle, annoying to use), or be completely unrelated to quality. Or right in finding every possible way to praise software that most definitely "sucks more" for the vast majority of users who don't happen to perfectly align with it? Software that sucks so badly people have to maintain patch lists to make it usable, with all the maintainability and stability problems that patches incur over time?
Suckless is a cargo cult of tradition following a fundamentalist interpretation of its holy texts, refusing to innovate and actually make computing better, stubbornly sticking to a model left unchanged since the 70s, which wasn't even that great back then, and proud of it.
Many other parts are terrible beyond measure: journald for example.
I think the concept of on-demand processes managed by the service manager is a good idea, but systemd is strong-arming services into accepting its philosophy.
I have nothing against the service management stack also addressing common principles like logging and on-demand starts a la inetd, but the notion that applications should link against a component of the service manager which is also used by the service manager boggles my tiny mind.
Poettering was a Microsoft fan. I guess he designed systemd like svchost.
The only evidence I can find of this... Is another comment from you. (Googled: '"MSFT_PRIVATE" partition'. One result.)