# All the code is wrapped in a main function that gets called at the
# bottom of the file, so that a truncated partial download doesn't end
# up executing half a script.
{
  code
}
That way, if the file is not fully loaded, the block will not end and the script will not parse.

Might be more than ten thousand, even, based on the reactions :)

#!/usr/bin/env bash
set -u
grep -wq '^# asfewdq42d3@asd$' "$0"
[ $? -ne 0 ] \
  && echo "script is not complete - re-download" \
  && exit 1
echo "script is complete"
# asfewdq42d3@asd

When we download a script from a remote domain we don't trust, we have to validate its checksum against the known-good one; we can't leave that to the script itself, which we don't trust.
But yes, if you were downloading from an untrusted mirror you would want to check the signature or trusted hash before running the script at all.
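The trailing-marker check from the script above can be exercised directly; a sketch, where `marker.sh` and `partial.sh` are hypothetical filenames:

```shell
#!/bin/sh
# Write the marker-checking script to a file, then truncate it to show the check firing.
cat > marker.sh <<'EOF'
#!/usr/bin/env bash
set -u
grep -wq '^# asfewdq42d3@asd$' "$0" \
  || { echo "script is not complete - re-download"; exit 1; }
echo "script is complete"
# asfewdq42d3@asd
EOF

bash marker.sh                    # marker present: prints "script is complete"
head -n 5 marker.sh > partial.sh  # simulate truncation: drop the trailing marker line
bash partial.sh || echo "exit code: $?"   # the truncated copy refuses to run
```

The anchors in the grep pattern matter: they keep the grep line itself (which does not start with `#`) from matching its own pattern.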
#!/bin/bash
SHA512="485fe3502978ad95e99f865756fd64c729820d15884fc735039b76de1b5459d32f8fadd050b66daf80d929d1082ad8729620925fb434bb09455304a639c9bc87"
# This line and everything later gets SHA512'ed and put in the above line.
# To generate the sha512 simply: tail -n +3 [SCRIPTNAME].sh | sha512sum
check_sha512() {
  # Compute the SHA512 hash of the script, excluding the first two lines
  local current_sha
  current_sha=$(tail -n +3 "$0" | sha512sum | awk '{print $1}')
  # Compare the computed SHA512 hash with the predefined one
  if [[ "$current_sha" != "$SHA512" ]]; then
    echo "Error: SHA512 hash does not match!"
    exit 1
  fi
}
# Call the function to perform the hash check
check_sha512
# Rest of your script starts here
echo "Script execution continues..."
The idea is simple: if the first line (#!/bin/bash) gets mangled, the script probably won't execute at all.
If the second line gets mangled, then obviously the SHA512 comparison won't work (variable name or value). Finally, if the rest of the script gets mangled or truncated, it won't hash to the same SHA512, and the function will exit.
For bonus points you can add a check if first line of script is exactly "#!/bin/bash" as well.
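Generating and embedding the hash can be scripted as well; a sketch, assuming GNU `sed -i` and a hypothetical `demo.sh`:

```shell
#!/bin/sh
# Build a demo script with a placeholder hash on line 2, then patch in the real one.
cat > demo.sh <<'EOF'
#!/bin/bash
SHA512="placeholder"
echo "payload runs"
EOF

# Hash everything from line 3 onward, exactly as check_sha512 will recompute it.
sha="$(tail -n +3 demo.sh | sha512sum | awk '{print $1}')"
sed -i "2s/.*/SHA512=\"$sha\"/" demo.sh   # GNU sed; on BSD/macOS use: sed -i '' ...
grep '^SHA512=' demo.sh
```

Because the hash is computed from line 3 onward, patching line 2 does not invalidate it.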
If the file is truncated in the middle of the word `check_sha512` it will try to execute a hopefully-not-existing command.
Wrapping in simple { braces } should fix this: if the closing brace is missing, you get a syntax error, and if it is present, you can execute the full thing, regardless of whether a trailing newline made it through. This is admittedly bash-specific, so it won't work for the linked script, but a (subshell) doesn't cause too many problems either.
Using a function and checking the SHA don't really add anything after these fixes.
Checking the shebang is hostile to environments that install bash elsewhere.
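The brace-wrapping behaviour is easy to demonstrate; a sketch with hypothetical filenames:

```shell
#!/bin/sh
# A brace-wrapped script: nothing executes until the closing brace has been parsed.
cat > wrapped.sh <<'EOF'
#!/bin/sh
{
  echo "step 1"
  echo "step 2"
}
EOF

head -c 30 wrapped.sh > cut.sh   # simulate a download truncated mid-body
sh cut.sh 2>/dev/null || echo "truncated copy: syntax error, nothing ran"
sh wrapped.sh                    # full copy: prints both steps
```

The truncated copy fails to parse (unclosed brace), so not even the first echo runs; the full copy runs everything.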
An almost-working possibility would be:
exec some-interpreter -c 'commands' "$0" "$@" ""
which will fail if the second ' is missing. The child interpreter can then check for later truncation by verifying that at least two arguments were passed and that the last one is an empty string. However, this is still prone to truncation before the -c.

How is it possible that there are ELEVEN different possible package managers that need to be supported by an installation script like this?
I can understand that some divergences in philosophical or concrete requirements could lead to two, three, or four opinionated varieties, but ELEVEN?
Does that mean that if I want to write an app that runs on Linux I should also be seeking to support 11 package managers? Or is there something unique about tailscale that would necessitate it?
edit: Thank you for the responses so far, but no one has yet answered the core question: WHY are there eleven of them?
Used GPT-3.5 to summarize these and tried to edit the response for brevity. Pardon any hallucinations here. Looks like it's mostly just different OSs all running their own software publishing/distribution portals. Lot of NIH maybe.
""" 1. apt: Debian, Ubuntu.
2. yum: Red Hat / replaced by DNF.
3. dnf (Dandified YUM): Red Hat, Fedora / successor to yum.
4. tdnf (Tiny DNF): lightweight DNF for minimalist distros.
5. zypper: SUSE/openSUSE.
6. pacman: Arch Linux, Manjaro.
7. pkg: FreeBSD.
8. apk: Alpine Linux.
9. xbps: Void Linux.
10. emerge: Gentoo Linux.
11. appstore: Apple / iOS, macOS. """
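For what it's worth, supporting all of these is less eleven code paths than one detection loop; a sketch (binary names taken from the list above):

```shell
#!/bin/sh
# Probe for whichever package-manager binary is on PATH and branch on it.
found=""
for pm in apt-get dnf tdnf yum zypper pacman pkg apk xbps-install emerge; do
  if command -v "$pm" >/dev/null 2>&1; then
    found="$pm"
    break
  fi
done
echo "package manager: ${found:-none detected}"
```

The real script would then dispatch to the matching install commands instead of just echoing the name.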
You don't have to do anything, it's just about how convenient you want to make it.
Because Linux is "just" a kernel that happens to be used by different OSes. Linux is just the program that runs binaries on your computer; it's not a package manager or an OS.
There is no official cooperation between the different Linux OSes. The people developing package management at SUSE or Red Hat (which are commercial companies) aren't the same ones developing APT at Debian. Odds are they don't even know each other.
Look, Android is based on Linux, but it has its own package management because most of the existing ones weren't compatible with what they wanted to achieve.
It's the same with other vendors: while package managers look like they are mostly doing the same thing, they all had their own requirements that justified their creation.
Anyway, as a developer you mostly don't have to package your app yourself. It's the job of either the distribution developers themselves (in fact that's most of the work in making a "distribution": they package and distribute software) or, in some organizations like SUSE or Arch, this job can also be done by the community, which allows for more up-to-date packages.
OK different ones for different hardware devices, sure.
But I asked ChatGPT about the difference between openSUSE and Debian (since you mentioned those), and everything listed seems like it could just be variants within the same OS, not a fundamentally different OS.
> Package Management:
> openSUSE uses the Zypper package manager and supports the RPM package format.
> Debian uses the APT (Advanced Package Tool) package manager and supports the DEB package format.
This comes back to my original question: they need different package managers because the OSes are different; the OSes aren't different because they wanted different package managers.
> Release Model:
> openSUSE typically follows a fixed release model with regular releases of a stable version, like openSUSE Leap, which has a predictable release cycle.
> Debian follows a more flexible "when it's ready" release model. There are three main branches: stable, testing, and unstable.
You could have unstable, testing, and stable, and then on top of that a predictable stable release cycle. These aren't incompatible with each other.
> Philosophy:
> openSUSE is known for its strong integration with the open-source community and its adherence to the principles of the Free Software Movement. It emphasizes stability and ease of use.
> Debian is known for its commitment to free software principles, as outlined in the Debian Free Software Guidelines (DFSG). It prioritizes stability, security, and freedom.
This just feels like a ChatGPT hallucination. It sounds like both are focused on Free Software. That said, I suppose no one is stopped from creating a Linux OS extremely focused on the FSM, a commercial one not at all interested in the FSM, and one somewhere in between.
> Default Desktop Environment:
> openSUSE offers various desktop environments, including KDE Plasma and GNOME, but its default desktop environment may vary depending on the edition (Leap or Tumbleweed).
> Debian offers a wide range of desktop environments, including GNOME, KDE Plasma, Xfce, LXQt, and more. Its default desktop environment is GNOME.
Like the package manager, inverted cause and effect.
> Community and Support:
> Both distributions have active and supportive communities, offering forums, mailing lists, documentation, and other resources for users seeking help or guidance.

> System Configuration:
> openSUSE uses YaST (Yet Another Setup Tool), a comprehensive system configuration tool that allows users to manage various aspects of the system through a graphical interface.
> Debian relies more on manual configuration files and command-line tools for system administration, although there are also some graphical tools available.
I wonder what YaST uses underneath, I bet it's a series of...configuration files :)
No reason why both couldn't work on the same OS.
The rest should be done by the distros' maintainers.
It's probably easier to script up the installation via package manager. You also get the benefit of upgrades along with the rest of the system.
Furthermore, I haven't seen a single instance of Flatpak being used to install applications on headless servers.
I also don't know many sysadmins who would be happy that each application they install on their servers comes with a full set of bundled dependencies rather than being dynamically linked against the base system.
Why not? This kind of script is a generally bad idea, because it's hoarding the responsibility of package maintenance. The better solution is to maintain a working package (better yet, convince someone to maintain it for you) for each distro's public repository, good documentation on their wiki, and working links to that documentation in your readme.
Why not not? Despite being a bad idea, it's not a hard idea. The implementation of this script is likely easier to manage than doing it the proper way.
--
Now if you really want to feel upset, start looking at build systems...
Sure, there is the Linux kernel in different versions and patch states, but everything else, including how software is managed (the package manager), is something the distribution decides.
As there is no standard, and for historic reasons, different distributions chose different package managers.
If you want to support Linux you normally decide which distributions you want to support and, more importantly, which versions of them.
The big ones out there are probably Ubuntu, Fedora, and Arch.
Then you can decide between building packages for the different package managers or just building a static/dynamic binary that works across those distros.
You can also use Flatpak and Snap, which make it easier to support different versions of the same distribution, but you run in a sandbox, and AFAIK lower-level access to the graphics stack (games) is a mess.
Yeah it is a mess, but at least most distributions have the same service/bootup manager
The problem is that the script could be truncated in such a way that it executes successfully. It defines a bunch of functions and then quits.
If you're not checking for the success or failure of the download, you're probably not checking for the success or failure of the script; something is just going to assume the script worked.
Edit: cue the HN responses to use nix, and other solutions
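One way to avoid both problems (truncated code and an unchecked result) is the download-verify-run pattern; a sketch, where a heredoc stands in for the `curl -fsSL -o install.sh URL` step and the pinned hash is computed on the spot rather than published out-of-band:

```shell
#!/bin/sh
set -e
# Stand-in for: curl -fsSL -o install.sh "$URL"
cat > install.sh <<'EOF'
echo "install ok"
EOF

# In real use this hash is pinned from the project's published checksums,
# not computed from the file you just downloaded.
pinned="$(sha512sum install.sh | awk '{print $1}')"

echo "$pinned  install.sh" | sha512sum -c - >/dev/null
sh install.sh                   # only runs if the hash check passed
echo "install script succeeded" # and, thanks to set -e, only if it exited 0
```

Saving to a file first means a truncated download fails the hash check instead of executing halfway, and checking exit statuses means nothing downstream silently assumes the script worked.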
Solaris handled this by having a config file in which you could specify which files to copy between boot environments. Don't have /var/mail, /etc/passwd, /etc/shadow, /etc/hosts, ... in a different FS? Better remember to copy them, or you lose your emails, users, hosts, etc.
The problem with this kind of failure mode is that it's silent and generally irreversible. There generally aren't tools to MERGE conflicting files later on, even if you discover the issue while the old boot-environment snapshot is still around.
On the other hand, this is sort of how Docker (and Solaris zones) work -- and why you can't upgrade a container in a container-like way, you must replace the entire container (i.e., any upper-COW is lost, unless you export it and build something new atop it outside of any sort of reconciled process).
On the other other hand, I've actually used BtrFS snapshots for exactly this, successfully, in exactly one case: a read-only volume containing my OS files (12 files total: kernel, kernel debug, several initramfs) for PXELINUX/EXTLINUX booting, which could be atomically upgraded. Since it was read-only, the only operation supported on that volume was replacing the files, and it only held those 12 files, it was safe to do so.
Just like security through open source, it's more a nice myth than a reality.
But security!
Basically, if you believe that code signing is a good thing (and I hope we all can agree on that), curl to shell is not great security practice.
Unless you're ready to fully analyze all the code and look for problems in the whole codebase, instead of just the install script, don't be an eager early adopter of every project you see posted somewhere. Wait for it to gain some social validation, and give it some time so smarter people with more time than us have had a chance to look for vulnerabilities in the whole codebase, not only in the script. Or if you really want to check it out, use an isolated VM first.
curl -fsSL https://ollama.com/install.sh | sh && ollama run llama3 "Why it is bad to curl | sh?"
...for details.