An installer isn't simply there to copy a program to your system. It copies the files and then modifies the system so that it is ready to use the new program, to whatever depth makes sense. You shouldn't need to do any other configuration of your system after the installer finishes in order to properly use the program. That includes things like associating file types with it, changing system settings to make it the default in various places (hopefully behind some kind of flag, to be fair), discovering and associating hardware, or any other step like that.
Note that piping curl to bash or running bash on the output of curl/wget is a minor point quickly glossed over in the article, which is actually complaining much more about using custom installation scripts that do "too much".
I think that's the main reason I'm reluctant to run curl|bash-ware. I might trust the authors not to be malicious, but I generally wouldn't trust them to be competent at cleaning up after themselves.
However, having a package that simply installs some files and then tells you "copy these lines to your .bashrc and modify this mount file and [...]" to set up your system is really not that much better - if you follow those instructions, it will be up to you to manually un-follow them if you later decide to stop using this package. And while whoever wrote the installer may or may not properly undo what they installed in the uninstaller, I can guarantee that no one will provide an uninstaller which un-does changes you manually made.
cc: Anyone working on macOS
It's possible for installers to be this full-featured, but it's plagued with footguns.
They try, and often fail, to build an idempotent shellrc patcher out of command-line pipes, or something equally convoluted. If they bother at all.
Instead of using something established, or even better... drop-in config directories.
Not everything needs to own (or even touch) the main config file!
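A sketch of the drop-in approach (the `~/.bashrc.d` name is just a common convention, not a standard, and the tool name is hypothetical): the main rc file sources everything in a directory once, and installers only ever add or remove their own file there.

```shell
#!/bin/sh
# Demo in a scratch HOME so we don't touch the real one.
HOME=$(mktemp -d)

# An installer only drops a file into ~/.bashrc.d; it never edits ~/.bashrc:
mkdir -p "$HOME/.bashrc.d"
echo 'export MYTOOL_HOME="$HOME/.mytool"' > "$HOME/.bashrc.d/mytool.sh"

# The one-time snippet in ~/.bashrc that makes drop-ins work:
for f in "$HOME"/.bashrc.d/*.sh; do
    [ -r "$f" ] && . "$f"
done

echo "$MYTOOL_HOME"   # the fragment was sourced
# Uninstall is just: rm ~/.bashrc.d/mytool.sh
```

No idempotency logic needed: re-running the installer just rewrites one file, and uninstalling deletes it.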
edit: Don't trust the 'curl the script before piping it' thing, either.
I don't have the link handy, but there have been demonstrated ways to alter the content based on timing, serving a malicious payload only when the script is being piped straight into a shell.
Even worse if the app starts with sudo to install some stuff in system directories.
Sure, but what's the corresponding step in the zsh "install"? Looks like copying over their ~/.zshrc. The "install" script could have chosen to clone the git repo, copy the file, then print "all done! run 'zsh' to start your new shell!" or whatever.
This is the author missing the point. The reason `curl | bash` is common is because devs don't like packaging for every distro under the sun, and MacOS, and FreeBSD, and... If you really think `curl | bash` is the problem, then you should be lining up to package the stuff you use for your distro. Instead, it is always someone else's problem.
Package managers are great... for the user. For everyone else, a polyglot system, with arcane technical policies, and even more arcane human policies is... not ideal.
> devs don't like packaging for every distro under the sun
A source .deb will generally work fine under Ubuntu, Debian and most flavours, unless you have some funky dependencies (and if you do, an installer script would be just as complex).
A RHEL/CentOS RPM will cover nearly the whole rest of the market.
MacOS/FreeBSD will be different enough anyway that you will need to write a bunch more in the install script.
Building a simple package that just delivers a binary is not even that complicated. Getting it to pass distro muster is often harder, but you don't need to do that.
> Package managers are great... for the user. For everyone else, a polyglot system, with arcane technical policies, and even more arcane human policies is... not ideal.
Most of those "arcane" policies are there so random incompetent dev won't fuck up other stuff in the system. Which is also why users want packages in the first place.
And you don't need to abide by any policy to make a simple package that just puts your files on the system. Building a dumb Debian package is just a few metadata files (post/preinst scripts plus a file describing your package) in a directory and a single command.
You only need to worry about policies if you want to submit the package to a distro, and those policies are in place for a reason.
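As a sketch, the "directory plus one command" workflow looks like this (package name and contents are hypothetical; the build step is skipped gracefully where dpkg-deb isn't available):

```shell
# Lay out the package tree: DEBIAN/ holds the metadata, everything
# else is copied verbatim onto the target filesystem.
mkdir -p my-tools_0.0.1/DEBIAN my-tools_0.0.1/usr/bin
printf '#!/bin/sh\necho hello from my-tools\n' > my-tools_0.0.1/usr/bin/mytool
chmod 755 my-tools_0.0.1/usr/bin/mytool

cat > my-tools_0.0.1/DEBIAN/control <<'EOF'
Package: my-tools
Version: 0.0.1
Section: base
Priority: optional
Architecture: all
Maintainer: Someuser <some@email>
Description: Example package that just delivers a script
EOF

# The single command:
command -v dpkg-deb >/dev/null && dpkg-deb --build my-tools_0.0.1 \
    || echo "dpkg-deb not available, skipping build"
```

Installing with `sudo apt install ./my-tools_0.0.1.deb` then also pulls in any Depends you declare in the control file, and `apt remove my-tools` cleanly deletes every file the package shipped.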
No shit they are better for users, that's their entire fucking point!
I'm not even sure we disagree about anything, and you're yelling? FWIW I package and distribute my software but for a long time I didn't.
My position is very simple -- it's always easier if someone else does the work for you. If someone chooses not to distribute with a package, that's fine. If it bothers you, the choice is to build a package spec and pipeline for that project, not to moan about it. But packages are not an entitlement of a non-paying user. That user is perfectly entitled not to use your software; complaining about the packages you may or may not have available is stupid, and not the dev's problem.
You write the spec files for the managers of choice; DEB, RPM, PKGBUILD, whatever
With that you parameterize the inputs. The version to build, where to get the sources, etc.
Maintaining these mostly means noting your build/runtime requirements the same way you do while developing.
Once the specs are written, the laborious work is finished. There are countless tools to make this less effort, e.g. pyp2rpm and alien.
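For illustration, a near-minimal spec of the kind these tools generate (name, URL and tarball layout here are hypothetical; only Version and Source0 change between releases):

```
Name:     mytool
Version:  0.0.1
Release:  1%{?dist}
Summary:  Example single-binary tool
License:  MIT
Source0:  https://example.com/mytool-%{version}.tar.gz

%description
Example package; the spec is parameterized on Version/Source as described.

%prep
%autosetup

%install
install -Dm755 mytool %{buildroot}%{_bindir}/mytool

%files
%{_bindir}/mytool
```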
I maintain packages through Fedora COPR, a similar system. These tools are my first pass at writing the spec for things I don't even own.
I practice what I'm preaching, and I really don't buy that it's a lot of effort. If you want users, do it.
This is a critical first step to being bundled in the distribution itself. You won't get maintainers if there's nothing to maintain.
The clowns at Red Hat do like to break manifest compatibility in the worst way, though: think "a macro with the same name now does something else in the new version". The idea of the .spec file being the whole manifest is... nice in theory, not in Red Hat's execution. But then the last time I did any of this for RHEL was around the RHEL 6/7 era, so maybe it's better now...
But even in that case, that's fixing a few minor things every 3-5 years at worst. There is no excuse not to make your packages if you're an actual serious developer, not some random hobbyist.
I do give a pass to apps that run as a single binary, as that, while suboptimal, is at least easy to work around.
I've heard a lot about these systems, and, if they do what they promise, I think this is great. Exactly what is needed. I already do package and distribute my software. My comments are mostly directed at those who have a problem with those who don't, because that's a fine choice too. It can also be a fine choice for a while. The problem is mostly one of attitude; we need less user entitlement re: packages. Packages are something I will get to if I want to, when I have the time, and if it interests me.
I would note there are other problems with the package manager ecosystem which make it ill-suited to packaging Rust apps, for instance. I am not an Arch user, but Arch really is leading the way here: https://wiki.archlinux.org/title/Rust_package_guidelines
Of course, you can run your own APT or RPM repo, but that actually asks your users for even more control of their systems (since you now have a way to get any new version more or less automatically installed on their system in perpetuity), and it makes it even harder for them to install your software.
You can also bundle your software as a .deb + .rpm + .[...] file instead of a .sh file, but there is really not that much difference.
> Of course, you can run your own APT or RPM repo, but that actually asks your users for even more control of their systems (since you now have a way to get any new version more or less automatically installed on their system in perpetuity), and it makes it even harder for them to install your software.
Technically incorrect, as you can (at least in apt) put filters on what's allowed from the repo.
Also, you want to have your cake and eat it too.
Either there is some 3rd party to look at package quality (even if "quality" in this case is "does not fuck up other stuff" and "uninstalls properly"),
or you trust developer to not fuck you over when they update the package
or you don't and only install raw .deb file.
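For the apt filtering mentioned above, a pin in /etc/apt/preferences.d/ can restrict a third-party repo to a single package (origin and package names here are hypothetical). Specific-form stanzas take precedence over the `Package: *` general form, and a priority below zero prevents installation entirely:

```
# /etc/apt/preferences.d/vendor: allow only mytool from this origin
Package: mytool
Pin: origin apt.example.com
Pin-Priority: 500

Package: *
Pin: origin apt.example.com
Pin-Priority: -1
```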
> You can also bundle your software as a .deb + .rpm + .[...] file instead of a .sh file, but there is really not that much difference.
The main difference is that a package can declare system deps (so you can be sure they are not removed by accident), and you don't need to write an uninstaller for it.
Or you can support one of the limited-capability package ecosystems, like Flatpak or Snap or AppImage. Then it's sort of like installing an app on your Android phone: the installed app (and its updates) only get to interact with the system through a customized sandbox that must first be described to, and accepted by, the user.
I mean, no, those aren't the only two solutions. As with accessibility requirements for public-access shops, "we the users" could mandate (either through legislation, or more interestingly, through some kind of medical/legal-like software-engineering bar association) that devs either properly package their apps, or don't release them at all.
I'm not saying it's a good idea; just saying that it's probably the solution brewing in the back of the minds of people who write things like this.
There's still no widely accepted answer for what 'properly package their apps' looks like. You could want snaps or appimages or a flatpak, or rpms or debs or docker containers or nix flakes or cargo crates or python virtual environments or jars or javawebstarts or portable windows executables or windows msis or webasm packages or web pages ... there's an impossible profusion that it's currently unreasonable to expect devs (many of whom are working for free) to support more than a couple of.
If there were a single solution that worked in almost all cases and platforms, didn't put unreasonable constraints on packagers and was widely adopted, then it might be somewhat reasonable to expect devs to support it, but that's far from the world we live in. The closest to that (despite its real problems) is probably piping a curl to a bash script...
If someone were to impose restrictions to the hobby things I do, then those hobby things will just become private repositories that will never see the light of day, and I'm sure that rings true for most people.
Yuck. As I said:
> Instead, it is always someone else's problem.
The "I am the customer" mentality has been with FOSS and Linux for a while, but one wishes people might shake themselves out of their stupor for 2 seconds to realize: "You're getting all this stuff for free." Instead, every user wants to man the battlements on Reddit and tell devs how to do the thing.
Re: the parent, she/he shouldn't be downvoted for making an unpopular yet interesting point.
While I don't advocate for piping curl to bash, this is exactly what I expect an installer to do. It should provide sane defaults that don't require me to fiddle around with manpages or other documentation and config files before I can even use the thing. I'd say that's even the standard for most software. Now, I might compromise on the installer telling me what command I need to enter to get a default configuration/setup integrated instead of doing it automatically, but I have too much shit to do to waste it on configuring the nth thing I've installed this week.
I think what's missing is some standardization around what an installer is allowed to do and flags to tell it when to make certain changes as well as explicit logging for what exactly was changed or added where, but that's not going to be solved if everybody has their own bespoke bash script for installation.
I don't mean to be facetious, genuinely curious, but to the author's point, isn't that the point of the package management system? It's a standardized and encapsulated way to provide software, with sane defaults, in an auditable way, that respects the user's system.
I tend to agree with the author here, but I'm sympathetic to the maintainer: packaging for rpm/deb (in my experience) can sometimes be an enormous pain with many hoops to jump through, _especially_ if you're trying to get your package accepted upstream. With that said, it is a mature, standardized process, and is fairly painless in the self-hosted/non-upstreamed case.
I would say this is the primary pain point for maintainers, which is why we're suddenly seeing bash scripts instead. The technical complexity and the process of upstreaming and then doing this for a bunch of different distros. If they're already rejecting package management systems because of their current state, the solution isn't "well, why don't they just use existing package management systems".
Devs who use curl | bash do that because they are control freaks and want and love to make mess in user's homedirs, that is all. This is a an ego issue, not a technical one.
YOU ARE RIGHT!
What's more, we could make common installer code so everyone could use the same rules and documentation and customize it to their liking.
And once many apps are using it, it might even be integrated into the system so you don't have to download as much, and all the bugfixes live in one place instead of a thousand different install scripts.
Once we have that, we can define a very simple data format: say, a data.tar.gz for the data and a control.tar.gz telling the installer what to do. As for metadata, just write a simple file, say
Package: my-tools
Version: 0.0.1
Section: base
Priority: optional
Architecture: all
Maintainer: Someuser <some@email>
If you need to run something after install, just put it in control.tar.gz as a postinst script and it will do that. And if everyone uses the one format, it can manage uninstalls too! Just keep a simple text database of the installed files.
OH WAIT THAT'S A FUCKING DEBIAN PACKAGE MANAGER
curl ... | bash
is the moral equivalent of {npm, pip, nuget, ...} install
and i really don't understand the folderol around that. In both cases, you can alter the command slightly to instead download the payload without executing it and inspect it first, if you wish. In both cases, you're ultimately going to either audit and then execute, or just execute, code from Somewhere Else. This is true for distro package managers too, though you could argue that sometimes, but not always (PPAs, community/, whatever), a distro package manager is an extra layer of insulation between you and nasty stuff.
For reasons such as these, but also things like telemetry configuration defaults and clean uninstalling, I prefer using a package manager. In a way independent package maintainers balance out the power of upstream developers over end users. They embody “you can just change it if you don’t like it” for the regular user.
Unfortunately, there just isn't a way to square the circle of "this has to work for everybody" and " this shouldn't take 300 lines to ensure a directory exists".
Piping from the internet into your shell is a bad idea.
https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...
But is it really that much worse than `pip install` random stuff? Or Homebrew, or Linux package managers - it's not like these things get audited.
APT is part of the core Debian distro. I don't know about "audited", but it's tested, and it's maintained. And the packages in "main" are also tested.
I don't program in Python, and I don't know how Pip packages are audited. An awful lot of the packages in Debian main are Python and Ruby libraries, and I suspect that they are rarely used: I assume most Python and Ruby users rely on their own language-specific package manager.
I also regret the arrival of distro-agnostic package managers like Flatpak. But that's fine; I understand why developers use them, and I'm not going to rag on them for that decision. I'm just much less likely to install them.
Because a lot of devs have never heard of it? I'm a linux app dev of <10 years and I've never heard of xdg until this post. I just assumed dotfiles in the home directory were still the de facto standard...
And it works! For people who have to stay on top of language changes (not everyone), who are on a wonky platform or that doesn't update quickly enough, this is actually a pretty okay method. And they do also offer alternative methods: https://rust-lang.github.io/rustup/installation/other.html
Here, the "You know what they should be doing..." attitude surely doesn't account for something.
> rustc changes too fast for anything but the most rolling of distros to keep up
This may have once been true, but isn't really anymore. Ubuntu updates all supported releases with the latest and greatest rustc, and does so about every couple of minor version releases (last was 1.59 to 1.61). This is fine for me. My MSRV is now whatever Ubuntu is shipping.
Or anyone that wants to try out the latest "$something but in rust" and tries to compile it.
>who are on a wonky platform or that doesn't update quickly enough,
ie, the vast majority of people so I question the word "wonky" here.
>This may have once been true, but isn't really anymore.
It's still true in my experience. But then again, Rust does change really fast. Maybe they fixed their entire bleeding edge demographic in the last couple weeks and now people refrain from using $latest features that don't work in rustc from 3 months ago.
Case in point: I work in web hosting. Yesterday a customer came to me asking for root access to the node so they could run an installer for something. No. But they had already tried running it as their user. And everything in their user account was gone. Why?
Because the installer expected to run as root, and its variables couldn't be defined properly, so when it went to clean up after itself, it did:
rm -rf ~/$variable/
and since the variable was unassigned, that became rm -rf ~/
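The standard guards against exactly this class of bug are `set -u` (abort on any unset variable) and the `${var:?}` expansion. A minimal sketch, with a hypothetical variable name:

```shell
#!/bin/sh
# The installer's cleanup step, with the variable accidentally left empty:
install_dir=""

# Unguarded, `rm -rf ~/"$install_dir"/` expands to `rm -rf ~//`:
# the user's entire home directory.

# Guarded: ${var:?msg} aborts the expansion if the variable is unset
# *or* empty, so rm never sees the dangerous path.
if ( rm -rf ~/"${install_dir:?install_dir not set, refusing to rm}"/ ) 2>/dev/null
then
    echo "BUG: rm ran with an empty install_dir"
else
    echo "guard tripped: nothing was removed"
fi
```

Run as written, the guard trips and nothing is removed; the subshell exits with an error instead of expanding to the home directory.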
I might not have it exactly right, but that's what the effect was. Piping curl to bash is asking a lot of somebody who doesn't know what they're doing, and should raise the hackles of somebody who does. At the very least, download and view the script yourself before running it.

Because there are Linux developers who have never heard of XDG and just put their stuff wherever. And since ignoring XDG doesn't make your application completely unusable, they have pretty much zero incentive to learn about it. Crazy world, isn't it?
Perhaps if XDG were cut loose from the Free Desktop project, more developers and maintainers would pay more attention to it.
And indeed, there are lots of tremendously popular apps out there (Slack, for instance) that use e.g. $HOME/Downloads as a default download directory instead of $(xdg-user-dir DOWNLOAD), and most users don't mind.
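For reference, the base-directory lookup the XDG spec asks for is just an environment variable with a fallback (shown here for config; the data and cache dirs work the same way with $XDG_DATA_HOME and $XDG_CACHE_HOME; the app name is hypothetical):

```shell
#!/bin/sh
# Per the XDG Base Directory spec: honor $XDG_CONFIG_HOME if set,
# otherwise fall back to ~/.config.
config_home="${XDG_CONFIG_HOME:-$HOME/.config}"

# The app then keeps its files in its own subdirectory...
app_config_dir="$config_home/mytool"    # ...instead of littering ~/.mytool
echo "config lives in: $app_config_dir"
```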
For something like the mentioned oh-my-zsh, it can be safely assumed the user is not a novice in most cases. Having to install in this manner may in fact deter the user, as they'd be suspicious. A well written README would be the better route.
I seem to recall a certain application whose curl-to-bash script installed docker and docker-compose and finally created its containers. The problem was that said script made the mistakes of pulling Docker from Docker's own repositories instead of the distro's, and of assuming that $distro_based_on_ubuntu (I think it was Mint) is Ubuntu. A mess was made and I had to help some guy fix it.
I wouldn't trust a shipped shell script that spends fewer than 200 lines on sanity checks alone.
Large shell script programming stinks. The person who wrote it probably swore off shell as soon as they were done. But it is portable and it isn't half the pain that packaging for several distros is.
Then when I really like the software and want to install it on the 'real' system, if there's much benefit in doing so, I spend more time and effort understanding the script. More often than not, I end up not doing this because there is no compelling need to install the software on the 'real' system.
I'm not sure how package managers prevent this sort of issue, but in general running shell scripts as root (and it probably needs to run as root) is a bad thing.
As always, you need to trust the vendor of software you install and/or do an audit of the source/installer regardless.
On the other hand, It's often the case that your machine is running scripts that have been fetched online via apt and the like and it's definitely something to consider especially with all the hacks that have been happening in the last few years and the undisclosed vulnerabilities available in the wild.
You do not take liberties with someone else's system, there is no need to do it and no excuse for it. You can have a reference example "make install" in your build system that serves as a reference for the packagers without you having to worry about all the 80 different distros. And it better also have a "make uninstall".
Respecting the possibility that a config file or even the bins and libs might already exist as part of the "make install", are just part of the job like writing the software itself, not some unreasonable extra burden.
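A reference pair of targets can be as small as this (names hypothetical; `install -D` is a GNU coreutils extension). The `DESTDIR` convention is what lets packagers stage the install into a build root instead of the live system:

```make
PREFIX ?= /usr/local
BINDIR  = $(DESTDIR)$(PREFIX)/bin

install:
	install -Dm755 mytool $(BINDIR)/mytool

uninstall:
	rm -f $(BINDIR)/mytool
```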
If you're that much of a baby then I do not want your 'free' gift software and nor should anyone else. What other corners are you cutting everywhere else in the software? What other gross lack of integrity do you think is ok?
Maybe this is more the result of turning every random application into its own container. It's fine to have an app installer configure the entire system to suit itself when the entire system is just the container that houses the app.
The whole point of the oh-my-zsh installation script is to modify your system to work with oh-my-zsh. If you don't want your system modified, you shouldn't run it: there is no other point of that script.
Build instructions are a completely separate thing, and are a complete distraction. No one sane waits for some random distro to discover your software and decide to package it themselves as a means of distributing it.
As far as most people are concerned, the role of things like apt or rpm is to manage the base system. Installing and keeping application software up to date is best left to the applications themselves - as it has always been on Windows or MacOS (before the app store craze), as it should be. It is not and should not be up to the Debian maintainers to tell me what version of Firefox to use, or how often I should update it.
Edit:
> Respecting the possibility that a config file or even the bins and libs might already exist as part of the "make install", are just part of the job like writing the software itself, not some unreasonable extra burden.
I assume you are referring to the author's complaint about the installer overriding their ~/.zshrc. If so, then that is again a misunderstanding of the point of this script - it explicitly tells you right in the description that it will do that AND it keeps the old file around in case you still need it.
To explain again - oh-my-zsh is a system for controlling your zsh installation. Its whole purpose is to take over things like your .zshrc file. This is explained very clearly on their main page, so running that script and expecting it to not modify your zsh settings is like installing Firefox and expecting it not to connect to the Internet when you type a URL in the address bar.
Similar assumptions and liberties are more and more common: changing all manner of system-wide default behavior, not just a user's own configs, sometimes even in direct conflict with other software that wants to make its own system-wide config, such that you nominally couldn't have both things at the same time. Whichever you installed second would work and break the other. In reality neither one actually needed to make such assumptions or break anything else; they could coexist fine. It was just grossly and inexcusably inconsiderate installers and directions.
From the user side, the package manager often doesn't do what I want, either. I could install Node (as an example) via `apt` or `yum`, and end up with a Node installed in a root location. Now I'm in a mess. Or I could use a install script, or even yet another 3rd party solution such as `npm` to do what I actually want: Node installed for me. ...of course, I just mentioned a whole other can of worms: All the "other" package managers out there.
TL;DR: KISS often is the best solution.
And you'll quickly see why projects say fsck it -- we support installation via curl | bash; go and package it yourself if you want to.
It really highlights the need for a broadly adopted "homebrew for linux" type package manager that could safely manage software without conflicting with OS packages.
As someone who uses Homebrew for Linux, I can say that "without conflicting with OS packages" cuts both ways: fine, I get more modern stuff than apt could imagine, but having to monkey with LD_LIBRARY_PATH or -Wl,-rpath over and over gets old real fast. I have no idea why they tried to be so cute putting things in a stupid directory (/home/linuxbrew/.linuxbrew) instead of /usr/local like they did with Homebrew for Mac (err, not the arm64 version, where they went back to /opt/homebrew, who knows why).
It reduces the user to nothing more than an endured, pseudotrustworthy ball of lard in the developer's formal equation of installation. As it is an installation, however, it beseeches the administrator, the root, the owner and the light of this system that it may achieve its purpose, and in doing so is a blasphemy. It supplants my GNUlike will and in its stead enforces the hopes and dreams of nothing more than a transient, a visitor.
For the last time: The ground your code touches is holy and hallowed. the rites of Posix and the decree of the Unix philosophy at the sides of the throne you approach alone implores you not to speak unless spoken to, unless absolutely in the favor of the god of this land. To sudo curl|/bin/bash is to commit an unspeakable treason in the divine presence, a sin unforgiveable before the light of the PTY and the TTY. To take the sudo sword of the emperor alone and wield it as you see fit is damnable contempt indeed.
Doing something called "piping curl to bash" is not a good idea. It's like trying to do something on someone else's computer without asking permission first. It puts you in the place of being in charge, even though it isn't your computer. Doing this can cause trouble and make the person who owns the computer very unhappy with you.
> OK, now explain the above in verses of King James English.
Behold, the ground thy code doth touch,
Is holy and hallowed much.
The rites of Posix and decree,
Beseech thee not to speak unless spoken free.
Alone thou approachest the throne divine,
Unless in favor of God's law thine.
To wield the sudo sword alone is sin most dire,
And damnable contempt before heaven's fire.
Forsooth 'tis treason unspeakable this day!
curl|/bin/bash shalt thou never say!To avoid regret their launch, I enabled sudo to ask me a password, to avoid some sudo malicious command in the wild destroying my box or wiping my nas drives...who knows? :)