While my experience with Snaps was mostly negative (due to bugs, a virtual "loop disk" for every app affecting system performance, etc.), I found that Flatpak finally lets me install essentially any app on any distro.
For example, the Elementary OS "AppCenter" apps are now available in any other distribution, thanks to the Flatpak remote. Testing daily GNOME builds has become as easy as installing their Flatpak references and letting them update automatically.
Regarding memory, storage and power consumption:
- The base runtimes (the heaviest part) are downloaded only once per system, e.g., one for the KDE Plasma ecosystem, one for Elementary, and so on. The first app you install will pull them in, so I don't see it as much more bloated than the alternative of installing an enormous bundle of system-wide dependencies (e.g. installing a KDE app in a primarily GNOME environment).
- Memory and CPU-wise, Flatpaks are very light containers (which do not require loop disks, or anything else) which should have almost no overhead. I never witnessed any loss of performance, at least.
- Bug and crash-wise, I experienced a tremendously stable and "flat" experience on Flatpaks. That is, if there is a bug in one app on one distribution, the bug will exist on all, or vice versa. This is not as common as it should be in containerized app install tools, and makes debugging overall much easier.
The only drawback is that updates take longer than on other platforms, probably because compressed deltas are not yet available, unlike in other major package managers.
With this, I don't want to describe Flatpak as a panacea in everything, but at least for GUI apps, it solved a lot of distribution fragmentation issues in my case.
Adding to that: it's really worth noting, on "Memory Usage, Startup Time," the author of this article went straight into Snap ("the slowest of all") and doesn't mention Flatpak once. Could it be … it isn't a big deal? Nah, let's move on, Flatpak bad.
They also conveniently use an old version of GNOME Software which predates the recent rework of how it displays permissions, age ratings, and other details. In Fedora 35, GIMP most definitely has a worrisome "Unsafe" message. It's very shiny and new so it would be excusable to miss that if you weren't pretending to be a well-researched hatchet job.
- Security: you are trusting random developers on the Internet to handle security of all the dependencies in a Flatpak, forever. Most do not provide security updates at all.
- Privacy: distros like Debian spot and patch out trackers, telemetry and similar things.
- Long term maintainability: your desktop becomes a random mashup of applications with increasing complexity that you have to maintain yourself.
- Licensing issues: distros review licenses (and find plenty of copyright violations while doing so). A flatpak puts you or your company at risk.
- Impact on the ecosystem: the more users switch to opaque blobs, the less testing and review is done for proper packages. The whole ecosystem becomes more vulnerable to supply chain attacks.
As opposed to what? Dynamic linking does not solve this problem, contrary to popular belief.
>- Long term maintainability: your desktop becomes a random mashup of applications with increasing complexity that you have to maintain yourself.
Again, as opposed to what?
>- Licensing issues: distros review licenses (and find plenty of copyright violations while doing so). A flatpak puts you or your company at risk.
Sounds like a good way for proprietary software to be distributed...
>- Impact on the ecosystem: the more users switch to opaque blobs the less testing and review is done for proper packages. The whole ecosystem becomes more vulnerable to supply chain attacks.
I don't see how one follows the other.
In fact, the lack of interdependence between apps makes this considerably easier, because a Debian upstream that sits in the middle and custom-compiles its "official" software can release security updates immediately for critical apps with critical security flaws -- without waiting to make sure that the security fixes don't break some other app.
Flatpak does not require you to have a separate upstream for every app, or to get your updates straight from the developer. Debian can still be a middleperson and they can still do all of the same moderation/customization/legal analysis.
----
Very importantly, on security, Flatpak is a paradigm shift away from the existing model of needing complete trust for the entire app:
> Security: you are trusting random developers on the Internet to handle security of all the dependencies in a flatpack, forever. Most do not provide security updates at all.
A big part of this is that you don't trust random developers with Flatpak, or even your own distro maintainers. Most applications do not need the level of access they have by default. The end goal of Flatpak (it is debatable whether it is achieving this, but the goal at least) is to significantly shrink the attack surface of your app.
If your calculator app doesn't have file access, or network access, or process forking, or any of these things because the runtime blocks them (and honestly, why on earth would it ever need those things), then it is a lot less dangerous for your dependencies to go out of date. A calculator should not need to worry about updating its dependencies, because it should not have access to anything that could be abused.
Now, that's an extreme example. Many apps will not be in the position of needing no privileges, but many of them will still be able to have their attack surfaces shrunk. Firefox for example really shouldn't have that much access to my filesystem. Apps should not (in general) share parts of the filesystem except in user-defined contexts. Many of them don't need Internet access at all.
Flatpak makes it easier for dependencies to go out of date, but it also (ideally) drastically reduces the number of potential security flaws you can have in the first place, and drastically reduces the ability of apps to exfiltrate data from other apps (again, this is the ideal, see X11 vulnerabilities + Wayland for how much of an ongoing process fixing Linux security is).
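You can check what a given app's static sandbox actually looks like today. A sketch, assuming a Flatpak install; the app ID below is just an example:

```shell
# Show the static permission grants of an installed Flatpak app.
# org.gnome.Calculator is an example ID; substitute any installed app.
flatpak info --show-permissions org.gnome.Calculator

# An app with no filesystem= and no shared=network lines in that output
# cannot read your files or phone home, whatever its dependencies do;
# anything else it needs at runtime has to be requested through portals.
```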
----
I would question a few things:
- Is the reduction in attack surface big enough to balance out the extra attention users/devs need to pay to updating dependencies?
- How many security vulnerabilities are due to dependency issues vs data exfiltration from the home folder, or from other apps that they have no need to access? Linux security is kind of a disaster in this area, how many vulnerabilities would we fix immediately just by sandboxing filesystem access so /home wasn't a free for all?
- Is this actually blocking maintainers from releasing security patches or making it harder for them to do so? I would argue no, I think the maintainers can do the exact same things they're doing today, and that their job may be even easier when they don't need to hold up an entire house of cards with every release.
- Is it better to trust Debian to try and patch out every tracker/telemetry, or is there an improvement to having apps that don't require Internet access just be flat-out unable to call home or send telemetry in the first place? I don't think this blocks maintainers from doing their jobs, and it means I just don't have to worry about trackers at all in my calculator app, even if I download it from a 3rd-party source.
----
The weak point here is other Linux vulnerabilities (gotta get off X11!), UX (an eternally hard problem to solve), and Flatpak immaturity/mistakes (I don't like that manifests are still a thing, portals are still being built, I think permissions could be better). But the fundamental concept here isn't bad. People rely on shared libraries/runtimes for a lot of things that they don't need shared runtimes to get.
And I can't stress enough: distros can still moderate and compile custom versions of Flatpak apps.
However, the conclusion still holds imo: both are bad. Not necessarily because they haven't partially solved what they tried to achieve, but because they are being pushed instead of provided as alternatives to good old package managers or binaries.
Though technically older than both, AppImage is more recent in adoption compared to both Flatpak & Snap, and yet 95% of the time I had absolutely no problem using AppImage (with the small caveat that desktop integration is less polished, and I use a neat tool called AppImageLauncher to 'install' them). I know it's an apples to oranges comparison and AppImage is not a package manager, but the point still stands.
It's a pain in the ass packaging apps for each distro, but the value of a single command to update libraries and applications on millions of Linux systems is worth it, no?
Flatpak is, more or less, just a bad package management... technology.
From what I have seen, most of the opposition to Flatpak comes from the same place as the opposition to systemd: fear of change. Despite being centered around technology, a very vocal part of the Linux community seems to be extremely conservative.
The only thing I think actually solves packaging is nix.
My installs of Signal and Firefox are with Flatpaks and GNOME Software transparently handles updating both.
Good things always require hard work; there is no magic solution. I appreciate the API and ABI stability and the slick package management of Linux distributions. The work of developers and package maintainers makes this possible. This is the reason why Linux is efficient and comfortable. It is not a magic solution but work, and that work makes efficient distributions possible.
The grass is green on the other side?
On Windows you have to install all automatic updates or new applications will not run; if you don't, you will face errors about missing C# runtimes or a C++ Redistributable. And application developers need to ship dependencies within the application. This is the reason why Windows is so fat, why applications are fat, and why a permission system is not possible. The article doesn't state this.
macOS on the other hand? They break compatibility every few years: Mac OS Classic to Mac OS X, Quartz, PPC, Intel, and finally M1.
Android and iOS had the luck of being new. Permissions work, as long as apps don't request them all. But still, iOS fails to provide file-system access, file handling is a burden for users, and backups are impossible. Apps have quickly grown fat: Garmin Connect, an app which cannot do anything without the cloud, is 340 MB. Do yourself a favor and check what your banking app requires; you won't get away with less than 200 MB, and only the good ones can show you your balance locally, which requires how many kilobytes to store? Messengers like Signal are also growing and growing; Signal is right about 170 MB.
Back to Linux:
I can only appreciate the path toward system-wide ABI/API stability on Linux. With systemd, glibc and libstdc++ valuing stability, and the recent changes around Gtk3 and Gtk4, the situation may become even better. I think Flatpak can be improved, and it should be improved. But we don't need another competing standard before we've tried hard to improve the existing solution.
I hope that one day Canonical will start collaborating with the community and especially Red Hat; they always implement a less favorable solution (Mir, Unity, Snap, Upstart...), lose, and harm Linux in the process. Collaborate on Flatpak, add a payment solution, and share the revenue with Red Hat and others?
I download a Flatpak from the Pop OS store and it works. It installs only in my profile, so another user on the same machine doesn't have access. You can't do that with a .deb!
I've never got into dependency hell where I need to apt-get a specific version of a library from a dodgy PPA.
If I uninstall it, Flatpak doesn't leave acres of cruft strewn around my disk.
I don't see how randomly installing a Flatpak is any worse for security than compiling the source myself. The permission model on Linux is far worse than Android's, so I just hope for the best anyway.
Snaps never worked right for me - and seemed to frequently break. But all my Flatpaks have run fine and at full speed.
Does it take up more disk space? Sure. But that's a trade-off I'm willing to make in order to just download and run with no extra work.
Sure, there are some efficiency and security gains to be made. But I'm much happier with a Flatpak world than the alternative.
Pretty sure you can, by doing "dpkg --root=$HOME -i mypackage.deb" or something like that. It's been a long time since I used dpkg, but it should be possible with some flag.
Otherwise I agree, Flatpak is a breath of fresh air!
As a sibling comment already noted, using "--root" doesn't always work and a q&a mentions the problems:
https://askubuntu.com/questions/28619/how-do-i-install-an-ap...
https://serverfault.com/questions/23734/is-there-any-way-to-...
edit: should have written "where does dpkg put the files of the package ?" or "where does that put the Deb's files ?", sorry.
It's worse because you're likely to install more packages using something like Flatpak than you would by downloading random binaries or building from source. That wantonness isn't justified given the current state of security on Linux.
(I'm just regurgitating OP: "Flatpak and Snap apologists claim that some security is better than nothing. This is not true. From a purely technical perspective, for apps with filesystem access the security is exactly equal to nothing. In reality it’s actually worse than nothing because it leads people to place more trust than they should in random apps they find on the internet.")
... what makes you say that? If I want to run something, it'll run one way or another no matter how hard the "system" wants to prevent it. Even if it's patch.exe downloaded from a 2004 Russian website.
No. It doesn't. You still need to trust the people who package the thing.
A working packaging system would give the user ultimate power to manage access to resources by each app: overriding or mocking, if the user so decides, whatever access the app believes it needs. Flatpak does not give you such power; it removes this power from you and assigns it to the packagers. Thus, not only does it not work: it works against you!
EDIT: The "dependency hell" issue is separate, and is solved more easily by distributing static binaries.
Flatpak and Snap have never claimed to solve the trust issue, though. Flatpak allows you to add your own repositories, and thus developers can package their own applications. So if you trust the developer enough to run their software, you should be able to trust them to package their own app.
If anything, isn't the Flatpak situation better in that regard, because the end user is more likely to have a sandbox?
How is this any different than sudo apt install foo?
As the number of dependencies for building an application grows, it becomes exponentially harder to shake the tree. This used to be the role of Linux distributions: they were acting as a push-back force, asking projects to support multiple versions of C libraries. This was acceptable because there is no C package manager.
Now that each language has its own package manager, the role of distributions has faded. They are even viewed as a form of nuisance by some developers. It's easier to support one fixed set of dependencies and not have to worry about backward compatibility. This is something all distributions have been struggling with for a while now, especially with NodeJS.
This trend is happening on all platforms, but is more pronounced in Linux because of the diversity of the system libraries ecosystem. On Windows and macOS, there are some SDKs the application can lean on. On Linux, the only stable "SDK" is the Linux kernel API.
This is a very good observation I think. Basically, what Linux distributions have failed to do is to create a higher level SDK for their platforms - instead, they rely on a soup of libraries and on the package maintainer model to know which of these is which.
Basically, they are failing to provide a true OS: there is only a Kernel + various user space apps, from systemd to KDevelop, with no explicit line in the sand between what is the system SDK and what is a simple app bundled with the OS.
Not really no. It's both simpler and more complicated.
At the very low level, everyone expects to get a POSIX system. At the user level, most users use either Gnome or KDE (well, mostly Gnome to be honest, but let's pretend), which provides what could be called a high-level SDK for applications.
That leaves the layer in between, which used to be somewhat distribution-specific but is seeing more and more consolidation with the advent of systemd and Flatpak.
Mac and Windows solve this by creating their own languages for app development and then porting the system services to be provided in Swift and C# or whatever in addition to the original C implementation. There is no corporation behind most Linux distros willing to do that other than Canonical and Redhat, but they're perfectly happy to just stick with C, as is Gnome (KDE being perfectly happy to stick with C++). Most app developers are not, however.
For what it's worth, Redhat did kind of solve this problem for the Linux container ecosystem with libcontainer, rewriting all of the core functionality in Go, in recognition of the fact that this is the direction the ecosystem was moving in thanks to Docker and Kubernetes using Go, and now app developers for the CNCF ecosystem can just use that single set of core dependencies that stays pretty stable.
At the same time, the tools which solve this really shine. You inevitably run into these issues with random third party dependencies on other platforms, too, but it's further from the norm, so you end up with awful bespoke toolchains that contain instructions like "download Egub35.3.zip and extract it to C:\Users\brian\libs."
Developers on GNOME deal with this regularly, partially because of our lack of a high level SDK. So one of the tools we have to solve this is first class support for flatpak runtimes in GNOME Builder: plop a flatpak manifest in your project directory (including whatever base runtime you choose and all your weird extra libraries) and Builder will use that as an environment to build and run the thing for development. This is why pretty much everything under https://gitlab.gnome.org/GNOME has a flatpak runtime attached. It's a young IDE, but that feature alone makes it incredible to work with.
They have consistently provided a high level SDK for their OS. With elementary OS 6, they moved their high level SDK to using flatpak (not flathub) as a distribution mechanism.
[0]: https://en.wikipedia.org/wiki/Common_Desktop_Environment
Something I took away from that is that it is Linus himself, personally, that is responsible (in a way) for Docker existing.
Docker depends on the stable kernel ABI in order to allow container images to be portable and "just work" stably and reliably. This is a "guarantee" that Linus gets shouty about if the kernel devs break it.
Docker fills a need on Linux because the user-mode libraries across all distributions are a total mess.
Microsoft is trying to be the "hip kid" by copying Docker into Windows, where everything is backwards to the Linux situations:
The NT kernel ABI is treated as unstable, and changes as fast as every few months. No one ever codes directly against the kernel on Windows (for some values of "no one" and "ever".)
The user-mode libraries like Win32 and .NET are very stable, papering over the inconsistency of the kernel library. You can run applications compiled in the year 2000 today, unmodified, and more often than not they'll "just work".
There just isn't a "burning need" for Docker on Windows. People that try to reproduce the cool new Linux workflow however are in for a world of hurt, because they'll rapidly discover that the images they built just weeks ago might not run any more because the latest Windows Update bumped the kernel version.
I read all the way through this and wept: https://docs.microsoft.com/en-us/virtualization/windowsconta...
Yet most language-specific package managers are deeply flawed compared to distro package managers. But I agree with your sentiment that beyond LTS for enterprise, the deb/rpm packaging paradigm is becoming obsolete. I believe nix/guix form an interesting new paradigm for packaging where there is still a trusted third party (the nix/guix community and repo) but where packaging doesn't get in the way of app developers.
I'm especially interested in how these declarative package managers could "export" to more widely-used packaging schemes. guix can already export to .deb, but there's no reason it couldn't produce an AppImage or a flatpak.
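For instance, `guix pack` already sketches what that export story could look like. A sketch, assuming Guix is installed; `hello` is just a stand-in package:

```shell
# Build a relocatable tarball of GNU hello plus its full dependency closure:
guix pack hello

# Export the same closure in other formats instead:
guix pack -f docker hello   # a Docker image tarball for `docker load`
guix pack -f deb hello      # a .deb (supported in recent Guix releases)
```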
Now, with Flatpak, each runtime is an SDK in its own right. However, unlike on Windows and macOS, a specific runtime is not bound to a specific OS release, but to the app's requirements.
The first time I noticed that trend was on macOS, where applications bundle most of their libraries, and binaries compiled for multiple architectures.
Then we had Electron apps that ship with their own full copy of a browser and dependent libraries.
When NPM was designed, its authors observed that resolving colliding versions of the same dependency was sometimes difficult. Their answer was to remove that restriction and allow multiple versions of the same library in a project. Nowadays, it's not uncommon to have 1000+ dependencies in a simple hello-world npm project.
Our industry is moving away from that feedback force that was forcing developers to agree on interfaces and release stable APIs.
In practice I don't really see a difference, I link against Qt on all platforms, win, mac, linux, wasm, android... and that's it.
Let's take an extreme example. If I build an app on Ubuntu and then try to run it against the system libraries on Alpine, it'll fail, because Alpine is built against a different libc. We can simply declare Alpine out of scope and only support glibc based distributions, but if we want good cross-distribution support we're still going to be limited to what's shipping in the oldest supported version of RHEL. So let's skip that problem by declaring LTS distros out of scope and only target things shipped in the last 3 years - and now apps can't target any functionality newer than 3 years old, or alternatively have to declare a complicated support matrix of distributions that they'll work with, which kind of misses the point of portability.
In an ideal world distributions would provide a consistent runtime that had all the functionality apps needed, but we've spent the past 20 years failing to do that and there's no reason to believe we're suddenly going to get better at it now. The Flatpak approach of shipping runtimes isn't aesthetically pleasing, but it solves the problem in a way that's realistically achievable rather than one that's technically plausible but socially utterly impossible.
Flatpak is a pragmatic solution for an imperfect world - just like most good engineering is.
Edit to add: one of the other complexities is that dependencies aren't as easy to express as you'd like. You can't just declare a dependency on a library SONAME - the binary may rely on symbols that were introduced in later minor versions. But you then have to take into account that a distribution may have backported something that added that symbol to an older version, and the logical conclusion is that you have to expose every symbol you require in the dependencies and then have the distribution resolve those into appropriate binary package dependencies, and that metadata simply doesn't exist in every distribution.
[1] http://refspecs.linux-foundation.org/LSB_4.1.0/LSB-Core-gene...
AppFS is a lazy-fetch, self-certifying system with end-to-end signature checks: a way to distribute packages over HTTP (including from static servers), so it's federated.
Since all packages are just a collection of files, AppFS focuses on files within a package. This means if you're on Alpine Linux and want to run ALPINE from Ubuntu (assuming Ubuntu used AppFS) then it would use Ubuntu's libc. This is pretty similar to static linking, except with some of the benefits from dynamic linking.
CERN has a similar, but more limited system called CernVM-FS [1]. AppFS isn't based on that and I only learned about it after writing AppFS.
AppFS is based (in spirit, not in code) on 0install's LazyFS. Around the turn of the century I was trying to write a Linux distribution that ONLY used 0install, but due to limitations it wasn't feasible.
This is possible to do with AppFS and how the docker container "rkeene/appfs" works.
Why should that requirement be considered so extreme when, in the Windows world, applications are often required to work as far back as Windows 7 (or were until a year or two ago)?
Plus, when linux apps try to ship private libraries, as official chrome packages do, that gets quite some backlash from distro maintainers.
The same problem that Flatpak is solving (shipping binaries that work across many distros) already has at least two solutions: static binaries (where possible; Go is great here), or shell wrappers that point the dynamic linker to an appropriate private /lib directory.
Among the 3 solutions here, Flatpak is the most complex, and the least compatible with what an advanced user might do: run stuff in private containers, with a different init, etc.
You haven't needed shell wrappers to do that for a long time, just link with -Wl,-rpath,\$ORIGIN/whatever/relative/path/you/want where $ORIGIN at the start resolves to the directory containing the binary at runtime.
Of course other things like selecting between different binaries based on architecture or operating system still requires a shell script.
Have you looked into Nix?
Though, here is e.g. https://github.com/matthewbauer/nix-bundle, which is supported as an experimental command in nix 2.4.
You can optionally support newer functionality with a single binary by dynamically loading libraries resolving functions at runtime using dlsym or API-specific mechanisms (e.g glxGetProcAddress).
I mean, on windows if I use a given libc, say msvcrt or ucrt (or heck, newlib with cygwin) I have to ship it anyways with my app. Linux makes that harder but in practice there's no way around this.
For e.g. - firefox (https://www.mozilla.org/en-US/firefox/all/#product-desktop-r...). Win64 installer is 50mb. MacOS installer is 130mb. Linux 64-bit is 70mb.
Same is the case with Chrome - https://chromeenterprise.google/intl/en_US/browser/download/... . Windows MSI is 79mb. OSX PKG for the same is 195mb.
So by that measure, OSX has already lost, right? The package-everything-together approach has won in probably one of the largest OS ecosystems that exists today. Size does not matter... a mom-proof experience does matter.
>How much progress could we make if Steam deprecated their runtimes, abandoned containerization for new games, and let all new games just use the native system libraries? How loudly do you think gamers would complain if a distribution upgrade broke their favourite game?
This OTOH does not exist. Linux gamers are 1% of either the gaming market or the desktop computing market. All gamers either dual boot Windows... or use Android. Even those who game on Linux do it on an abstraction layer like Wine, which is what Valve maintains - https://github.com/ValveSoftware/wine and https://github.com/ValveSoftware/Proton
Valve only needs to maintain Wine & Proton for a distribution. The games themselves? Well, they are Windows-only. Nobody compiles for Linux.
And 1.2% is slightly more than half the mac market share on steam. Should mac users also be ignored?
The games that run on proton have not been compiled for linux.
OSX users are 2.56% of Steam. Linux is 1.13%
https://store.steampowered.com/hwsurvey/Steam-Hardware-Softw...
This isn't true at all. I'm afraid you've misunderstood the situation. Get on steam and check for yourself. There are many games compiled for Linux specifically (in addition to other platforms) that do not require proton. They use some ubuntu or debian packages for their libraries and have no linkage to anything windows.
In fact, this reinforces it. OSX believes file size matters so little they’re willing to double file size simply so users don’t have to pick x64 or x86 when downloading the app (and if they’re using the App Store the App Store could do it for them, but even that Apple thinks is too much complexity).
Having said that, I never bothered installing it on my current laptop which sort of reinforces the point others are making about being less concerned with application size.
Sadly, it seems the Linux world just can't wrap its head around the idea of managing applications with the same simple mechanisms we use to manage regular every day files. It would seem that Linux Desktop users just love having inflexible management tools do it instead. Well, there is AppImage, but unfortunately its use is not wide spread.
Seems to me they love treating application management like one manages dependencies in a software project. There are a lot of parallels with the paradigm, and it's clear where the notion came from. And I think in the context of Unix/Linux history it makes sense. If I were an MIT grad student in 1978 there probably wasn't much difference between a library and an application, practically speaking.
But that was over 40 years ago. There is a clear distinction now, and the overwhelming majority of PC users are not MIT grad students. Applications should be easy to install. People don't care that there is more than one version of some dependency. They don't care that your repo doesn't have what they want. They have work to do.
First off, 1.13% of Steam users are on Linux [1], and given how many users Steam has, just 1% of that population should be able to make quite a fuss. Secondly, Valve does need those containerized runtimes even if most games run on Proton, since Proton also depends on those libraries, so not sure why you're dismissive...
[1]: https://www.phoronix.com/scan.php?page=news_item&px=Steam-Li...
Until I saw this - https://news.ycombinator.com/item?id=28978086
Yeah sure, Linux gamers are more engaged, etc etc. But for an indie developer making games... it's like 5% of the market. It probably does not even recoup development costs.
I realised that I am probably doing a disservice to the indie gaming developers, by insisting on a militant stance.
Wine is fine. I'll just play it on Linux. Let them make more money.
I personally don't use it, so I can't tell how good the story is with 2 different versions. I've heard lots of upgrade troubles from mac users when a new OS release comes out, but maybe the authors are usually better in keeping a version for current-1 around, or the latest version compatible with current-1? Then it's still only 2 versions and not 10 distros.
For Flatpak there is an API you can use to change permissions, but for Snap, from what I can remember, that is not something you, the user, can change; it's up to the maintainer to enable them.
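That API is exposed through the `flatpak override` command, which lets the user tighten (or loosen) any app's sandbox after the fact. A sketch; org.example.App is a placeholder ID:

```shell
# Tighten an app's sandbox with a user-level override:
flatpak override --user --unshare=network --nofilesystem=home org.example.App

# Review what you've overridden, or undo it:
flatpak override --user --show org.example.App
flatpak override --user --reset org.example.App
```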
Applications like an IDE use a lot of different resources, so I gave up on using one as a Flatpak; luckily JetBrains ships their IDEs as a tar.gz binary package you can use instead.
Flatpak works best when the application is very self contained, like Spotify, it streams music from an internet service, it doesn't require any special permissions.
I used Bitwarden as a Flatpak. It had limited file access, with one granted directory (Downloads). When I went to download an attachment from the Bitwarden application, the file-saving dialog started one directory up from Downloads, so you had to pick and open the Downloads directory first before saving. However, I managed to save my attachment in that starting location outside of Downloads, some void directory that I never found.
One time I didn't pay attention and installed the Snap package instead of the native binaries, and then I spent several hours debugging an "access denied" status returned by the mq_open kernel API (https://man7.org/linux/man-pages/man3/mq_open.3.html) in my program, which only happened when called from a C# program, but not from a C++ program running under the same user account.
The baseline tools, like all the GNU standard Unix tools, and other command line tools are still managed by DNF, APT, whatever, and then another set of apps are managed by Snap. It's pretty confusing, especially if you never use the "app stores" otherwise. You quickly end up with: apt install <some command line tool> and apt install gimp, but now that's a different Gimp than the one in App Store. You also can't really remove Gimp from apt, because it's weird that apt can't install everything.
I haven't really used Flatpaks, but Snap is confusing (to me at least), because it doesn't actually replace APT, and it pollutes the output of "df" and "mount".
I'm also concerned that some packages will never be updated, or will stay stuck on a library with security issues. Perhaps that's just me, but I'd rather have that one application break.
Nix/Guix seems like the only one that's trying something different enough to address some of those other use-cases, but the cat's out of the bag for most people.
I argue that's simply not true. Scripting languages came up with package managers because the largest platform (Windows) does not have a package manager. Actually that scripting languages implement package managers is a very good argument that their functionality is desired. As a side note, every language package manager is much worse in my experience than all of the system package managers I ever used.
Maybe that's because nix and guix don't interop despite having rather similar principles, and because we mostly don't have GUI for it (like GNOME Software has integration for flatpak) nor desktop integration (like AppImageLauncher). Are you aware of work being done in this space?
That's not the case anymore. If you're aiming higher than .net framework 4.x, you have the option of either self-contained package (your app + .net core - min 70MB - appimage approach), or framework dependent (requires .net version at installation - flatpak approach).
I find the "things are not shared" section a bit weird though. Yes, not everything uses the same base and that's sad. But I've got 8 flatpak apps and 7 of them do share the runtime. We'll never get 100% unified and deduplicated approach here - in the same way we'll never unify Gtk/qt/tk/wx/...
> Flatpak allows apps to declare that they need full access to your filesystem or your home folder, yet graphical software stores still claim such apps are sandboxed.
Yes, distros really need to start talking about permissions / capabilities rather than some generic "sandbox". Some apps need to effectively read the whole disk. But maybe they don't need to write to home. Or maybe they don't need internet access. We'll have to get mobile-phone-like permission descriptions at some point, because a single label just doesn't describe the reality.
There is a choice, and it's between using a later framework, and relying on the OS one.
Applications such as VLC and GIMP which 'ship' with whole-filesystem access are an eternal dilemma. Would you rather the authors ship without this permission and break app functionality? Or have them ship with restrictive permissions and let users manually enable the respective permissions to regain functionality when the software breaks? It is easy to see which decision is feasible and works for all parties. The permission labels on the different app stores are confusing about sandboxing, however; I agree with the point that the store should make this as clear as possible.
I think the bigger point the article misses is the ability to control these permissions at this level. Even if the software author ships 'dangerous' default permissions, the user can always revert this decision and sandbox it effectively if they so wish.
Flatpak is a crucial, much-needed fix for the Linux package distribution problems this article highlights so elaborately; in my humble assessment, the benefits of this solution massively outweigh nuances such as the one the author mentioned about package sizes.
to give a data point, I work with a lot of artists who use Macs and none of them use mac app store apps because of endless issues when accessing the file system
I speak only for myself, but I will never enable Flatpak on any of my devices. A lot of other Linux users share the sentiment.
If the steam runtime didn't exist, most gamedevs would only target the most popular distro - probably the current Ubuntu LTS, and you would have to recreate the runtime on your distro of choice.
And once there's a new Ubuntu release you would also have to recreate it there (or the game updates and now you'll have to recreate it on the now-old version).
The choice isn't between steam runtime and a utopia, the choice is between the steam runtime and something much worse. Linux libraries simply aren't stable enough in API and ABI.
If you are a Linux distribution maintainer, please understand what all of
these solutions are trying to accomplish. All your hard work in building
your software repository, maintaining your libraries, testing countless
system configurations, designing a consistent user experience… they are
trying to throw all of that away. Every single one of these runtime packaging
mechanisms is trying to subvert the operating system, replacing as much as
they can with their own. Why would you support this?
Uh, because User Freedom™? Or are we supposed to consume Linux applications exclusively via the maintainer-supplied packages? Also, is this part: "testing countless system configurations, designing a consistent user experience" ― something that actually happens? Given the example in this very article about Fedora going to auto-convert all its rpm packages to Flatpak, it sounds, as the youth says nowadays, "kinda sus".

However, as can be seen with AppImage, which seems to be strictly focused on dependency management, that results in bloat.
It would be great if we could have a sandboxing solution that ignores the dependency management altogether.
The problem is not with Flatpak or Snaps (or Docker). The problem is that the fragmentation in the actual, live-deployed Linux system runtime is huge. At one point, I know Canonical suggested a common, shared baseline for all LTS editions of all distributions, but there was no interest from Red Hat in particular (the other "largest" player). It's not too surprising, since Red Hat is the biggest contributor to the base GNU/Linux system, so it can decide how it deals with it and push it onto others.
Backwards compatibility is hard in software in general, but I think simple extensions of tools like apt and apt archives (like Launchpad PPAs) would have allowed binary distribution of software in an already familiar way where dependency management is not a solved problem per se, but has all the familiar gotchas.
As for app stores, that's surely an effort to extract some profits, but it's also what big customers are asking for in their IoT solutions.
I disagree. Pretty much every desktop OS in the world manages to get quite a lot of backwards compatibility without a whole lot of trouble, except for Linux Desktop.
That suggests the problem isn't hard, it's just that the culture of Linux Desktop is incompatible with the concept.
I can't find the original article anymore, but you can find references to it on Joel Spolsky's blog:
- https://www.joelonsoftware.com/2000/05/24/strategy-letter-ii...
- https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost...
Referenced posts probably live somewhere on https://devblogs.microsoft.com/oldnewthing/ today, but the quoted snippets should tell you that it's not that easy at all, and that Microsoft is investing heavily in maintaining it.
That's not at all my experience with Windows or MacOS. Sure some older apps work fine, but certainly not all of them.
The documentation seems to focus on filesystem access sandboxing. Is there something with practical suggestions on how to sandbox things like webcam access, clipboard, screen (for screen sharing/recording apps), networking... with bubblewrap?
(I know some of that is visible in the filesystem, but not all of it is)
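As a rough answer, some of those resources can be cut off at the namespace and device level rather than the filesystem level. A hedged sketch using real bwrap flags follows; the binary name `some-app` is a placeholder, and the exact mount layout will vary by distro.

```shell
# Sketch of a bwrap invocation that blocks more than the filesystem:
#   --dev /dev      fresh minimal /dev: no /dev/video* webcam nodes
#   --unshare-net   new empty network namespace: no network access
#   --unshare-ipc   separate IPC namespace (blocks X11 SHM tricks)
#   --new-session   new session: blocks TIOCSTI terminal injection
# Not bind-mounting $XDG_RUNTIME_DIR keeps the Wayland/X11 sockets out,
# which removes clipboard and screen access (and all display output).
bwrap --ro-bind /usr /usr \
      --symlink usr/lib64 /lib64 --symlink usr/bin /bin \
      --proc /proc --dev /dev \
      --unshare-net --unshare-ipc --new-session \
      some-app
```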
Instead, I suffer with Open Build Service https://build.opensuse.org/. It's kind of cool, free and can build for multiple distros, but making it actually do so is a pain. But I still prefer this over flatpak&co both as user and as developer.
It just doesn't look like "the future of application distribution", https://nixos.org/ does.
I heard about that before, but never tried. How in practice do you deal with conflicting library versions?
> It just doesn't look like "the future of application distribution", https://nixos.org/ does.
Strongly agree. I just find it sad that nix/guix doesn't have proper desktop integration (to my knowledge), and that nix requires nixGL hacks to start any GUI on foreign distros.
It's a mess, without meaningful documentation and with lots of bugs; I've spent hours digging through the source code to get any idea of how to use it.
I build everything static except Qt. My project is small enough with only 4 external libraries, so it's ok atm.
The trust is outside of the scope of package managers
I actually like that for apps that have no business reading any files on my system they simply cannot do so. Sure, you can point to apps like GIMP and such that have chosen to give access to the file system by default to make it easier for users while portals get polished. If that bothers you then simply use Flatseal to change those permissions. Meanwhile apps like Spotify are entirely restricted from accessing any of my filesystem. Likewise Discord can only access files from a specific directory or a youtube-dl style tool like VideoDownloader also cannot access any of my files and always includes the latest version of ffmpeg and youtube-dl regardless of distro politics.
The problem is that we still need a solution to stop hacked apps from calling fopen() on unrelated files. The article ignores how bad the native runtime security solutions are - SELinux is nigh unusable, no distro has many apparmor profiles, etc.
Flatpak and snap use containerization because Linux's security modules aren't common across distros and aren't very usable.
Flatpaks and portals are moving in that direction (according to my non-expert opinion).
All this friction and these complaints are just normal for such a big transition in how desktop apps should be written. I remember Android devs being up in arms when there were significant API changes in past Android versions. The situation for Linux looks similar to me (aside from the fact that the ecosystem doesn't have money to throw at the problem).
Flathub is a strange beast. There's no mention of security on their wiki. They stopped publishing minutes (or moved them elsewhere?) in 2017 (https://github.com/flathub/flathub/wiki). They have a buildbot for automated updates from developers, but they accept binaries anyway (e.g. https://github.com/flathub/us.zoom.Zoom/blob/master/us.zoom....), so what's the point? It appears to be a fairly amateur effort, and yet is at the center of the infrastructure Red Hat and Gnome are pushing. I'd love to see some white hat activity targeted at compromising it, to demonstrate the shaky foundations.
But on the other hand, it's nice that I can run Zoom sandboxed (apparently - it's not obvious what the granted permissions are: https://www.flathub.org/apps/details/us.zoom.Zoom). It's nice that Jetbrains and Zoom have a way to publish apps that can run on all distros. It's nice that I could rollback a version of IntelliJ that was buggy with a single snap command that took 5 seconds. The goals are good.
I wish Linus took more of a BDFL approach to the desktop occasionally. Ubuntu & Red Hat need to sit down in a room and have a constructive conversation to converge Snap and Flatpak into something new, deprecating the infrastructure built to date, and fixing some of the glaring problems. There's room for both to make money without further diverging the ecosystem.
--socket=x11 is a massive hole in the sandbox, since x11 does not have a security model - any client can observe and manipulate any other client. for x11, a viable solution would be running flatpak apps in xephyr, but flatpak doesn't do that. long-term, wayland is a better solution.
$ flatpak install flathub com.microsoft.Teams
com.microsoft.Teams permissions:
ipc network pcsc pulseaudio x11 devices file access [1] dbus access [2] tags [3]
[1] xdg-download
[2] org.freedesktop.Notifications, org.freedesktop.secrets, org.gnome.SessionManager, org.kde.StatusNotifierWatcher
[3] proprietary
ID Branch Op Remote Download
1. com.microsoft.Teams stable i flathub < 86.0 MB
Proceed with these changes to the system installation? [Y/n]:
and you can query an apps permissions at any time: $ flatpak info --show-permissions com.discordapp.Discord
[Context]
shared=network;ipc;
sockets=x11;pulseaudio;
devices=all;
filesystems=xdg-download;xdg-pictures:ro;xdg-videos:ro;home;
[Session Bus Policy]
org.kde.StatusNotifierWatcher=talk
org.freedesktop.Notifications=talk
com.canonical.AppMenu.Registrar=talk
com.canonical.indicator.application=talk
com.canonical.Unity.LauncherEntry=talk
GNOME Software in GNOME 41 also has a much better list of permissions than the version shown in this article: https://blogs.gnome.org/tbernard/2021/09/09/software-41-cont...
This hasn't been the case for decades. X has the ability to isolate clients and/or only allow partial access to other clients. For example try this:
$ xauth -f lockthisout generate :0 . untrusted
$ XAUTHORITY=lockthisout xterm
The first line generates a new X authority file with a single entry that causes the display :0 to be untrusted. Then whatever runs inside xterm (or whatever you launch) will be considered untrusted, e.g. you won't be able to run any OpenGL application or xdotool (it actually crashes after saying the XTEST extension isn't available, i guess it can't handle that case).

Note that this can be worked around by running "export XAUTHORITY=~/.Xauthority", since pretty much everything uses ~/.Xauthority for the local user (which is connected as trusted). This can also be addressed by storing the session X authority file somewhere else (AFAIK some distros generate a new randomly named one for every session), or you could use something like AppArmor or SELinux to restrict the untrusted application's access to the session's X authority file. Or just run it as a different user who is always untrusted, though that can be inconvenient for some types of applications.
That said there are some issues, mainly because the whole X security functionality hasn't seen much attention in recent years. For example an untrusted client can't use 3D accelerated graphics but you may actually want to run a 3D app while not giving it access to everything else. Though that isn't due to some inherent limitation of X or whatever, just that since it never gained much attention (since the entire GUI sandboxing thing was never much of a priority) nobody bothered to work that part out.
In the future it might just be better for X desktop to be running untrusted applications under a Wayland "pseudocompositor" that simply lets X handle the actual window management (current Wayland compositors that can run under X create a window and treat it as a display which isn't exactly nice from a UX perspective and doesn't allow -trusted- X programs like xdotool work with Wayland windows).
In general, GUI package managers w/ Flatpak support show you the permissions, as does the CLI upon an attempted install.
Only if the snap was built with classic confinement in mind. Otherwise, just slapping --classic on a random snap does nothing; there's even a warning displayed about that. Classic is very much the same as a random 3rd-party vendor app built and unpacked under /opt. Unfortunately some software, especially IDEs and languages, is unsuitable for running under confinement and needs to be distributed this way. By passing --classic you give your consent to use an app package in this sub-par way.
FWIW, perhaps you meant --devmode, which as the name implies is mostly for developing a snap? snap install --help describes that as:
    --devmode    Put snap in development mode and disable security confinement

I think the solution to that should be something other than removing the sandbox. I see "portals" referred to elsewhere in this thread w.r.t. flatpak; does snap not have something similar?
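For what it's worth, snap's closest analogue seems to be its interface system rather than portals, and connections can be managed from the CLI. The subcommands below are real snapd commands; the snap name `some-snap` is a placeholder.

```shell
# Inspect and toggle a snap's interface connections.
snap connections some-snap            # list plugs/slots and their state
sudo snap connect some-snap:camera    # grant camera access
sudo snap disconnect some-snap:home   # revoke home directory access
```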
> By passing --classic you give your consent to use an app package in this sub-par way.
Sure, but it's a terrible name. It should be --unsandboxed or --fullaccess. "Classic" sounds like a mode you'd want.
Flatpak brings Linux to what OSX had like 10 years ago. It just works
Regarding download size? Even I, living in a third world country, have an internet connection good enough to not care about download sizes...
Regarding file redundancy. If that is much of a problem for anyone, I'm sure that could be dealt with at the file system layer: hash/detect duplicate files and compact them, then copy on write. Personally, HDD space hasn't been an issue for me on PC or laptops in a long time.
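As a toy illustration of the hash-and-share idea: the sketch below hard-links byte-identical files in a directory tree. Real deduplication would use filesystem reflinks (btrfs/XFS) or OSTree-style content addressing to preserve copy-on-write semantics, which plain hard links do not; this is illustration only.

```shell
# Toy sketch of hash-based dedup in userspace: byte-identical files in a
# tree are replaced by hard links to a single inode.
dedup_dir() {
  find "$1" -type f -print0 | xargs -0 sha256sum | sort |
  while read -r hash path; do
    if [ "$hash" = "$prev_hash" ]; then
      ln -f "$prev_path" "$path"   # duplicate content: share one inode
    else
      prev_hash=$hash
      prev_path=$path
    fi
  done
}
```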
Flatpaks finally just work, and we have the technology to make them usable. Moreover, I argue that Flatpak IS the future, as those two previous points will only become more irrelevant as time goes by.
As for me I'll keep using Flatpaks as they work great and have wide support. Alternatives are either obnoxious to use (Nix and Guix) or lacking in basic functionality (Appimages).
It has a namespacing feature, so f-ck'n use it. Instead, they intentionally produce collisions.
Providing and sharing common runtimes could have worked pretty well if developers actually used a small set of common runtimes instead of picking from one of a gazillion slightly different rebuilds.
As a replacement for "native" package management? No. Flatpak makes sense if you have a small number (< 5) packages that you want to run from upstream so you can always have the latest version while continuing to use your distro's native packages for everything else.
Unfortunately (and ironically) it's ill-suited for games, which need to have the latest GPU drivers available, which is antithetical to the whole idea of a stable, universal base system.
All the runtimes on Flathub are built on top of the standard fd.o runtime, so the OSTree-level dedup should generally work for them.
> Unfortunately (and ironically) it's ill-suited for games, which need to have the latest GPU drivers available, which is antithetical to the whole idea of a stable, universal base system.
I believe there's an extension available for the latest Mesa builds, so you can opt for the bleeding edge and then just remove the extension if things break.
There are several "Linux" operating systems that use an immutable base image: Fedora Silverblue, Endless OS, purportedly Steam OS 3. The idea here is the distro packages are great for composing your base system, but once that is done, the entire `/usr` filesystem is frozen. You can upgrade it or roll it back atomically, and in some cases you can do some magic to apply smaller changes (Silverblue has a nice implementation with `rpm-ostree`), but distro packages are intentionally not a typical end user thing. In this kind of OS, the best way to install apps is using something like Flatpak. And that's very much what Flatpak is designed for.
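For context, the atomic upgrade/rollback workflow on Silverblue looks roughly like the following. These are real rpm-ostree subcommands; the package name is a placeholder.

```shell
rpm-ostree status            # show current and previous base deployments
rpm-ostree upgrade           # stage a new base image atomically
rpm-ostree rollback          # boot back into the previous deployment
rpm-ostree install some-pkg  # layer an RPM on top of the base image
                             # (discouraged for apps; that's Flatpak's job here)
```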
So…
$ flatpak list --app | wc -l
94
And it works well :)
This sounds like a luddite who is against machines. How about embracing the revolution, so you can save a huge amount of work and focus on other things? There are enough other things distro people could focus on instead.
I don't buy any of the technical arguments here; it sounds mostly like a moralizing argument rather than anything else.
And the argument that it is a walled garden is simply nonsense. It's not a monopoly any more than any default repository is.
> I encourage you to study the technical arguments until you understand them well enough that they become convincing.
I have been using flatpak since it came out and many other people in this thread have already pointed out the failure of the technical arguments, no need to do that again.
The luddites were not anti-tech from a moral perspective, but understood mechanization of work to profit the bosses and go against the interests of workers (artisans) and therefore practiced sabotage. Despite this initial mischaracterization, i believe the metaphor holds and the author is against this application of this technology precisely because they believe it produces a net negative impact across the ecosystem.
However, I still think moral is the right word. The consequences are considered immoral, so it is very much a moral perspective.
And I think the argument that it is bad for the ecosystem is fundamentally wrong.
Flatpak in my book rocks, and it's ahead of macOS DMG files and light years ahead of the Windows .exe downloaded from the Internet strategy.
First, the suggestion that permission should be prompted when running the app, the app using some sort of API. This would mean that Linux app should become tightly coupled to Flatpak. I have no idea how it would be possible to convince all developers, distributions, vendor to adopt Flatpak. I think it would be largely ignored so an ineffective way to bring those change to the Linux ecosystem in the end.
Then, I don't have the real answer, but prompting users with questions when they use the app is the worst approach to security. Being tech literate helps understanding what's happening, but people with less understanding answer those out of annoyance because they're trying to do something and, all of a sudden, the operating system gets in the way. The choice is not driven by security concerns but by frustration or urgency.
I can't think of any operating system that found the right solution yet, but I do think that the Flatpak approach is by far the smartest. Just naturally give access to things by following user choices (just give access to the select file once the file is selected, just give access to the screen the user choose to share once the user choose to share it). No more stupid prompt that gets in the way of things.
Speaking of runtimes, it's exactly the same problem. For Flatpak to be adopted, it would require all distribution to invest a lot in it. How would it work for Gentoo and its use flags for example? How to deal with different versions of dependencies?
What would happen would likely be for distribution to build the equivalent of runtimes, but on their side, allowing to have several versions in parallel. So in the end, it would be a similar situation, but much more messy as each distribution would have to do the job. Is having those runtimes an actual problem or the solution that Flatpak came up with to the problem of fragmentation in distributions? In my opinion it's the second.
In my experience, Flatpack and their ilk make it easier for end-users to install applications and for developers to distribute them.
Alternatives may be orders of magnitude better from a performance, size, and/or complexity standpoint but until they're at least as easy for both end-users and developers they'll never reach critical mass.
Spotify and telegram open up pretty much instantly for me. Ubuntu 20.04
What kinda annoys me though is that file dialogs open in the background. Now I am used to check for it, but when the main window is still in front, yet blocked, because focus is in the file dialog modal, that can get confusing for a moment.
AppImage is ok for me too, the application needs to be "complex" enough to justify the download size though. Precisely what the KCalc example from the article does not do. I didn't even know there is a daemon thingy that auto-integrates them, usually I put them in a folder, grab an icon somewhere and place it next to it with same name, then add an entry in OS manually, so it appears in app starter, can be placed on task bar, etc.
It's just a few clicks and if I ever get annoyed by it because I find myself doing it several times a day, I should ask myself if I am maybe just installing way too much stuff? :)
Edit:
Wanted to mention there is an application called "Flatseal" that lets you view and (to a degree) edit permissions of Flatpak apps: https://github.com/tchx84/flatseal (I use it for spectating only, on occasion, because too much fiddling will probably break the app at some point).
For most updates those download sizes are not really representative for the actual update size. For example right now I had two updates available with a total size of 336.5 MB + 275.2 MB. However `flatpak update` actually only had to fetch 17.4 kB + 4.4 kB and was done in a few seconds.
I'm also familiar with this approach, but that's unfortunately something i can't easily teach my friends. AppImageLauncher is arguably more user-friendly, and i wish it were more commonly distributed as a distro default.
This tired refrain shows an obscene lack of gratitude for all the things that just work. Yesterday evening I participated in a Twitter Space on gratitude for all the great things we have as software developers, and I will repost the recording here when it's available.
Edit: My mistake; the recording was already posted: https://www.youtube.com/watch?v=U10SuAHV8kQ
Currently the only Snap app on my laptop is Brave. I remember when I ran apt install brave, there was a line in the console suggesting to use the Snap instead.
So for me, anecdotal as it is, it's fine. I agree with the author: it's too much bloat, and it doesn't matter that disk space is cheap.
Re disk space: no mention of Nix? That also solves updateability for security issues. Another solution is Gentoo's -- deliver the sources and recipes, (re)build as needed. That could also work in userspace.
It would be nice to be able to 1) distribute Linux GUIs 2) let friends install them easily 3) have the friend edit the GUI app logic and run the modified version. AFAIK, this is currently only possible with apps based on interpreted languages (e.g., Python+GTK), but even there you'd need to guide your friend to install the required packages.
Re "the state of software complexity in 2021": Ignore complex software, and just don't include it on your system. Start with a subset, and add only what you need. That's how OpenBSD remains manageable. If people want to extend, they can go ahead. But you don't have to fix the world.
No, you just trigger the download on install to get the 15th version of the runtime
A non-flatpak example is Electron. No one cares how big it is. It works.
I download huge games from Steam all the time and have never looked at the size of a game even a single time... in years. Disk space is still cheap on end users machines.
As long as the calculator works I do not care how much disk space it takes up.
As someone who's helped many friends/neighbors struggling with limited disk space (whether on desktop or Android), i don't think this is true at all. I mean, end-users are usually not conscious of what a reasonable size is for an app, and will often uninstall one app to install another one instead of complaining about app bloat. They're still very much suffering from the problem and care about it.
A lot of people care, especially Linux users.
> Disk space is still cheap on end users machines.
Not cheap for everyone.
The author promotes avoiding this redundancy by using the libraries on the host. But maybe we should go the other way, by having only the bare necessities to run the OS installed on the host, and install all applications as Flatpak.
I agree with some things though. Different versions of the runtimes should share as much as possible, so as to consume as little space as possible, and libraries retaining backwards compatibility would help with this. And applications should target the regular Freedesktop/GNOME/KDE runtimes if possible, Fedora shouldn't be doing their own thing completely separate from Flathub. And we need to get applications to use portals and not allow them to declare file system access on install time.
I believe this is the approach Endless OS and Gnome OS are using.
What the article wants is useless without fixing Linux's security modules. Very few people know how to use SELinux or AppArmor, there's no standardization between distros and apparently even RedHat has given up.
Flatpak and Snap do containerization because they have no good alternative.
P.S.
>If I ship an app for Windows I don’t have to include the entire Win32 or .NET runtimes with my app.
Note that Microsoft has been moving away from that with .NET Core/.NET. The new runtime does not come with the OS. It's still possible to create a package and have .NET installed separately, but as far as I can tell, most people prefer to package with the runtime.
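The two .NET packaging modes mentioned can be sketched with `dotnet publish`. The flags below are real CLI options; the project is assumed to be in the current directory.

```shell
# Self-contained: bundles the runtime with the app (AppImage-like, 70+ MB):
dotnet publish -c Release -r linux-x64 --self-contained true
# Framework-dependent: small output, needs a matching .NET runtime on the
# target machine (runtime-sharing, Flatpak-like):
dotnet publish -c Release --self-contained false
```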
Most programs link against quite a bit more than libstdc++ and you clearly can't expect users to install a bunch of packages to get their program running. Those extra libraries usually take up most of the space in a package. If you include all the shared libraries you need, apart from the system libraries in your distribution, you pretty much have an AppImage already, just as a tarball instead of a squashfs image.
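A hand-wavy sketch of what "AppImage as a tarball" amounts to: walk the binary's `ldd` output and copy everything not considered a system library next to the binary. The exclusion list here is an assumption for illustration; real bundlers (e.g. linuxdeploy) maintain curated lists of libraries that must come from the host.

```shell
# Sketch: collect the non-"system" shared libraries a binary links
# against, AppImage/tarball style.
bundle_libs() {
  bin=$1 dest=$2
  mkdir -p "$dest"
  ldd "$bin" |
    awk '$2 == "=>" && $3 ~ /^\// &&
         $3 !~ /\/(libc|libm|libdl|libpthread|ld-linux)[.-]/ { print $3 }' |
    while read -r lib; do
      cp -n "$lib" "$dest/"   # -n: don't overwrite an already-copied lib
    done
}
```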
I think GOG's path is not horrible, but it is much less effective at making Linux a viable alternative as a gaming platform. They literally only support Ubuntu 16.04 and 18.04.
It also struck me as odd, that the author was complaining about the complexity introduced by Proton. What is Valve supposed to do?
That's not wrong, but AppImage is focused on the UX of the application as single file, which you can move/copy around and can be integrated with the desktop launcher, which i personally don't know of standard solutions for with a classic tarball. For example, Tor Browser is a classic tarball and works great, but the .desktop generation relies on some hacks rather than a properly-defined mechanism.
> the author was complaining about the complexity introduced by Proton
That's not what i understood from the article. I mean wine/proton makes sense on its own to run Windows apps, i think the author was criticizing that their specific approach relies on steam-specific runtimes instead of using system libraries.
As a long time desktop user, I've dropped Ubuntu for Debian because of this. At some point in the past Ubuntu seemed like it was going all in on snaps. I don't know what they're doing now, but I don't care any more.
For me they work wonderfully. On Ubuntu I usually go for the Snap since it just works. Solving multiple dependency issues before I can use a tool always cost me time before. Now, not so much. Even AppImage is nice.
The fact that most software is even available on Linux now is due to the fact that developers can create one app for Linux, instead of multiple.
Maybe I'm not familiar enough with GNU/Linux to see the drawbacks, but if it's just disk space I'll happily make that trade.
Oh no we're all doomed.
I can finally have a stable distro with packages released yesterday. As a test I tried installing GIMP on a Raspberry Pi, a 32-bit x86 Ubuntu 18.04 machine, a 64-bit Ubuntu 16.04 machine, and a 32-bit x86 Ubuntu 20.04 machine. Three architectures, 4 distros, 4 machines: 1 command, the same software. It took a bit long to install on the Raspberry Pi, but it worked.
Flatpaks, snaps and appimages finally blurs the lines between distros. If there are problems, they should be improved, not abandoned.
Of all the arguments of the author I only agree with the RAM usage part. It's true that having containerized apps will lead to much more CPU cache thrashing, which can dramatically reduce performance if you are regularly starting and stopping programs. This is indeed a problem that needs solving. I can envision a NextFlatpak that keeps a global cache of versioned libraries (kind of like Apple does in macOS) so as to reduce RAM usage as much as possible.
And I agree with the author on the startup time (which is also related to memory usage and CPU cache thrashing). That too is a problem that needs solving.
The rest of the arguments, though, meh. Are you seriously arguing that we need to cater to 120 GB NVMe SSDs because bigger ones are "expensive"? Really? I spent 255 EUR (288 USD) on a 2 TB NVMe SSD (with 3210 TBW of endurance, one of the very highest of all NVMe SSDs!) for a spare dev laptop, and I am convinced most devs would have no trouble making that purchase whenever the whim strikes them. And even people in my poor country, people who are mind-blown by their new 1300 EUR salary, can still plan well enough financially to afford a new iPhone 3-6 months later. So again, not convinced here. However expensive NVMe SSDs are per GB, they are still well within the grasp of most users who actually need them.
As for security, it seems the author overstates the efforts of distro maintainers. They do an admirable job, this is well known, but they really can't be everywhere, so his effusive praise might be a bit misguided (as is that of other posters here). Flatpaks hardly change the game there, especially if apps get properly isolated (as another poster alluded to: I want to be able to cut off all internet access for an app, just as one example). And none of what the author states will stop a malicious app that's not in a container, so really, what's his point there? That the Debian maintainers inspect 100% of all apps and will spot a malicious calculator? Come on now.
And the rest of the post is just ranting and airing a beef with a few things, which is clearly visible from his change of tone. Call me crazy, but such an article can't be called convincing. ¯\_(ツ)_/¯
I actually really like appimages for distributing games and small apps. The user experience is a lot like executables on Windows, where you can just download any exe from anywhere and expect it to just work. Sure, the large size can be a problem, but not every appimage will necessarily end up that huge.
The lack of sandboxing isn't a big deal in my eyes because, as the author mentioned with Flatpaks, most apps end up with too many permissions anyway. Proper sandboxing needs some kind of gatekeeper/moderator to pressure developers into following the rules. This can be done with app stores, but the only working app store for Linux desktops is Steam, and that's only for games. (To be fair, KDE Discover works, but it's a very poor user experience.)
>The lack of sandboxing isn't a big deal in my eyes
It should be; executing random AppImages you've downloaded online is a huge security liability.
>Proper sandboxing needs some kind of gate keeper/moderator to pressure developers into following the rules
You make a good point that good moderation in the store is very important, but even without that you can still tweak the sandbox permissions yourself (very easily, in fact, with Flatseal), and it rocks to have that feature available to you.
>Note that the app package itself is only 4.4 MB.
Calc 98 is huge at 495 KB:
http://www.calculator.org/download.html
Xcalc (RPN) is around 200-240 KB.
Is anyone here distributing a commercial Qt/C++ app on Linux? Are you using Flatpak or something else?
EDIT: if you use CMake, which I guess you do, you can integrate it using CPack and an external generator: https://github.com/AppImage/AppImageKit/issues/160#issuecomm...
see: https://www.usenix.org/legacy/events/atc10/tech/full_papers/... and http://www.usenix.org/events/lisa11/tech/full_papers/Potter....
What I did differently was basically say that it should be a Linux distribution: each layer should be equivalent to a Debian/Red Hat package, with full dependency information between them. Therefore it would be easy to a) create an image (just pick the highest-level things you want and the rest get resolved automatically, much like calling yum/apt-get in a Dockerfile) and b) upgrade an image when you want to (similar to upgrading an existing Red Hat/Debian system; it just creates a new image artifact that can be tested and deployed).
You also don't get as much hiding in the system as you do today. Yes, images might be built on Debian, Ubuntu or Red Hat, but you really can't easily verify what changes were made in the middle. In my system, imagine there's a "Debian" layer repository: in general, you would end up with a bunch of easily verifiable Debian layers and a small set of "user defined" layers (and, when an image is deployed, the actual container layer). The user-defined layers would be much harder to hide things in, i.e., it would be very visible if one were overriding binaries or configuration files that you expect to come from a controlled package.
Considering the amount of language and implementation that Docker seems to share with my work that predates it, one has to wonder if they saw my talks / read my papers (though yes, it's very possible they came up with it totally independently as well). They were active in the USENIX and LISA communities (at least when those were more alive).
Just one obvious glaring issue with the article:
>Note that the app package itself is only 4.4 MB. The rest is all redundant libraries that are already on my system
Um, no. It's not redundant. Your system might get updated, replacing the needed libraries with backwards incompatible libraries, and then your app would break.
I do agree with the general idea that not limiting possible runtimes to an approved list means essentially infinite runtimes, which can balloon disk space. I think you need choice to prevent stifling innovation. At least at first. But hopefully the community will end up doing the right thing here over time.
What are developers supposed to ship their apps with? Make packages for every Linux distribution under the sun? Flatpaks and AppImages solve a real problem. The current alternative is NOT being able to use the specific software at all because you don't have the right version of Ubuntu.
Maybe Guix/Nix are solutions for some of the pain points of traditional package management. And yes, libraries really need to focus on backwards compatibility. In the meantime, AppImage/Flatpak gets the job done. I am not out of luck when my distro does not offer the right package in the right version.
Yeah: ye who enter this plateau, abandon all hope of escaping leaky abstractions.
I wish lazy loading of the bloat were built into apps as an architectural pattern: a component only gets installed when you actually use it.
This being said, I do see these systems that basically just double down on adding layers for layers' sake as more disruptive than systemd.
Sure, like it was happening 20 years ago, and we still had apps break ALL THE TIME when the user just upgraded to a new distro version or when she changed her distro. It's simply not scalable. It has already been tried. Sure, the flatpak world is not perfect, and I would never choose this option (maybe I like snap better), but it's a step in the right direction.
What about the gargantuan WinSXS folder?
What would solve this issue? A common package manager and central repository, across all Linux distributions. Then the small team only needs to package and ship to one repository, in 1 format.
But I have trouble finding packages, and I hate to compile. So I use Debian instead.
I would rather just use a distro that isn't ideal for me than start using flatpak and snap and all this other stuff. I really don't like how fragmented packaging is in the Linux world, but I will not use these prepackaged containers that have all dependencies included. They're worse.
OP might want to edit this.
Let's go through some of the issues:
> Size
My /var/lib/flatpak is 13 GB in size. That's a lot of space. On the other hand, that's about one game with content and textures, so I'm not sure it is too much of an issue. With more and more applications agreeing on the base runtimes, I expect this to get smaller.
> Memory Usage, Startup times
Snap is ungodly slow at starting applications; that is broken, and it is a fundamental design flaw. This issue, however, is not present in Flatpak: it doesn't have a startup problem.
Neither Flatpak nor Snap have increased memory usage because of containerization.
> Drivers
Yes, Nvidia sucks. Do you hear, Nvidia? Mainline your driver already.
> Security
> "Flatpack and Snap apologists claim that some security is better than nothing. This is not true."
That's debatable. The same argument was made against seatbelts, child-proof pill bottles or guard-rails on cliff streets and I don't think it holds up. Even if seatbelts don't protect you from any harm, they protect you against some harm.
Currently, Linux desktop software offers no security against malicious code. The only protection the 'traditional' Linux desktop offers against malicious code is user separation, which is no protection at all. Before full security can be offered, applications need to be migrated to safer practices.
> Permissions and Portals
This doesn't seem to be a critique of any user-packaging-format but rather of how GTK implements interactions with Portals.
> Identifier Clashes
That's a problem that arises from Flatpak's decentralized nature: everyone can create a repo and add packages under any name there. I agree that it would be nice not to have these clashes. Notice that this is not an issue with Snaps, since with Snaps there's a central authority assigning the identifiers.
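The shape of the clash problem can be sketched in a few lines of Python. The remote names and package entries here are illustrative stand-ins, not real Flathub or Fedora metadata:

```python
# Two independent remotes can both publish a package under the same
# reverse-DNS application ID -- with no central registry to prevent it,
# the client is left to disambiguate.
remotes = {
    "flathub": {"org.gimp.GIMP": "GIMP (Flathub build)"},
    "fedora": {"org.gimp.GIMP": "GIMP (Fedora registry build)"},
}

def offering_remotes(app_id):
    """Return every remote that offers app_id; more than one entry
    means a plain `install app_id` is ambiguous."""
    return [name for name, packages in remotes.items() if app_id in packages]

print(offering_remotes("org.gimp.GIMP"))  # -> ['flathub', 'fedora']
```

A real client resolves this by asking the user which origin to install from; a central naming authority, as with Snap, avoids the question entirely.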
> Complexity
Some of the complexity cannot be avoided, whether for backward compatibility or for security. I very much doubt that Flatpak will be the reason civilization collapses. (That would be shortsighted greed and NIMBYism.)
> All of these app packaging systems require the user have some service installed on their PC before any packages can be installed
All app packaging systems require at least some installed component. There's no way you can make software run on all systems without requiring at least some infrastructure. Do you want your 64-bit AppImage to run everywhere? Too bad! It requires a 64-bit glibc to be installed.
> App stores
> [...] This is the reason Ubuntu wants everyone to use Snap [...]
This seems to be limited to Snap. Again, not a fundamental design issue.
> Backward compatibility
> Forcing Distributions to Maintain Compatibility
> I believe this is partly due to a militant position on free software.
This sounds like he wants to dictate to FOSS developers how to develop their software. See, I understand your frustration, but that's not how FOSS works. You cannot force anyone to spend time on stuff they don't want to do.
There's some valid criticism in there, but it just reads like an angry rant. I think the author could do better by making individual articles about the shortcomings of both Snap and Flatpak, instead of just lumping them together. Snap and Flatpak are just too different.
In the end, Flatpak is the future. No one is going to package their software for the nth minor distribution and I rather have a slightly (size)-inefficient system of packaging software rather than not have access to that software. Distributions have started to recognize that they cannot package everything and have started to reduce focus to a smaller set of core packages that work well together.
To be fair, he appears to be arguing for the current model, where armies of distro volunteers review and manually package up debs and rpms (granted that on smaller distro teams, this may be less thorough). There is an actual third party involved in this explicit review step. With flatpak the developer has a direct route to the user without any intermediary or checks.
> Size
As the author mentions, many users use budget computers or something like a Raspberry Pi, not a big gaming rig. But you also have to consider network usage: all that data needs to be downloaded, and that in itself is a cost for every computer running that distro.
I work from my laptop when I'm traveling, using my phone as a wifi hotspot; downloading gigabytes of data to install some small app is not feasible.
And you also have to consider poorer countries, where you have neither unlimited data nor the largest drives.
I remember the old days of the critique against Windows from the Linux community, Windows forces you to constantly upgrade hardware even though that old machine still works fine. I guess this is now true for Linux too.
Distributing applications with binaries for multiple architectures, with the correct one selected at run time, is a solved problem. The original Mac OS did it as far back as the 68k/PPC transition, and the concept has been supported by NeXT/Mac OS X application bundles since the beginning.
Yes it does, because you end up loading multiple versions of the shared libraries.
There is also the memory used by the Flatpak/Snap daemons. snapd is quite big, since it is written in Go.
If you distribute your software via Flatpak, you can forget about me ever using it. Every computer I own has Flatpak disabled, and I will use system repos until the day I die. Plan accordingly!
People who desire to see more widespread Linux Desktop usage will some day have to come to terms with the original conception of the phrase "The customer is always right". If you continue to ignore what people want, don't be surprised when they don't show up.
Is Flatpak really secure? Is its permission system good? How does resource sharing work? Whatever. I'm not going to comment on any of that in either direction.
But I do want to call out this specific paragraph:
> Apparently, developing client APIs for apps themselves is antithetical to Flatpak’s mission. They want the apps running on Flatpak to be unaware of Flatpak. They would rather modify the core libraries like GTK to integrate with Flatpak. So for example if you want to open a file, you don’t call a Flatpak API function to get a file or request permissions. Instead, you call for an ordinary GTK file open dialog and your Flatpak runtime’s GTK internally does the portal interaction with the Flatpak service (using all sorts of hacks to let you access the file “normally” and pretend you’re not sandboxed.)
Heck client APIs. This is the correct way to do sandboxing, for two reasons:
----
First, this is not something that's immediately obvious, but over time both the phone platforms and the web have been learning that applications should not be able to change their behavior based on, or even monitor, whether or not they are sandboxed. This closes an important hole in app security where application developers either change/block behavior, or try to "trick" users into granting unnecessary permissions.
For a filesystem, an application should not be aware of whether it has access to every file, and it shouldn't be aware of whether or not it's running on a temporary filesystem. And while it should be able to ask the user to grant it access to a directory, it should have no easy way of validating whether or not the user granted that permission -- it should just get a folder pointer back, regardless of whether that pointer is real, a different folder, or a virtual/temporary location.
A lot of existing sandboxing doesn't follow this rule. The web is normally pretty good at sandboxing, but in this regard it's honestly pretty bad. We can't follow this ideal with everything, there are some permissions that are impossible to fake. But in general, we don't want to make things too easy for malicious developers.
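To make the principle concrete, here is a small Python sketch of a portal that hides the grant decision from the app. `request_directory` is a hypothetical function for illustration, not a real Flatpak or portal API:

```python
import tempfile
from pathlib import Path

def request_directory(user_granted, real_dir="/home/user/Documents"):
    """Hypothetical portal call: the app always gets a usable directory.

    If the user declined, the app transparently receives a private
    temporary directory instead of an error it could probe for.
    """
    if user_granted:
        return Path(real_dir)
    return Path(tempfile.mkdtemp(prefix="sandbox-"))

# From the app's point of view both outcomes look identical: it gets
# a path it can read and write, with no cheap way to tell whether it
# is looking at the user's real files.
workdir = request_directory(user_granted=False)
(workdir / "notes.txt").write_text("works either way")
print((workdir / "notes.txt").read_text())  # -> works either way
```

The point is not this particular mock but the contract: grant or deny, the API surface the application sees is identical.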
> If I want file access permissions on Android, I don’t just try to open a file with the Java File API and expect it to magically prompt the user. I have to call Android-specific APIs to request permissions first. iOS is the same. So why shouldn’t I be able to just call flatpak_request_permission(PERMISSION) and get a callback when the user approves or declines?
In this regard, Android and iOS are wrong. Not as a matter of opinion, I will make a medium-to-high confidence claim that they are just flat-out approaching sandboxing security incorrectly. It's not their fault, I wouldn't have done a better job. This is something that we've become more aware of as we've seen how sandboxing has evolved, and as it's become more obvious how apps try to circumvent sandboxes. And this could be a longer conversation, yes there are tradeoffs, but the benefits of this approach far outweigh them, and I think in the future that more platforms (including iOS/Android) are likely to move towards seamless permissions that are hidden from applications themselves.
Remember we sandbox applications in part because we don't trust developers. And while it's not the only reason for hiding user controls from apps, it is a good enough reason on its own. Don't give attackers unnecessary information if you can help it, this is security 101.
And the author is right, this approach is genuinely more brittle because it requires building mocked/sandboxed API access rather than just throwing an error. But hey, sandboxing itself is more brittle. We deal with it for the UX/safety improvements.
---
Second, for the reason that the author hints at:
> Fedora is auto-converting all of their rpm apps to Flatpak. In order for this to work, they need the Flatpak permission system and Flatpak in general to require no app changes whatsoever.
There are substantial advantages to having sandboxes that work with existing apps. This is always a give-and-take process, but in general we don't want to force updates of a large portion of user-facing apps. And we want them to be easy to get into Flatpak.
The core libraries, user portals, distros themselves -- it is a pain to update them, but they have more eyes on them, they are more likely to be updated securely, and they have more resources available to them. I think it would be a mistake to shift that burden entirely onto application developers. Linux has a habit of doing this kind of thing sometimes, and it's a bad habit. We want Flatpak (or any sandboxing system) to be something that can be wrapped around applications without a lot of work. Ideally, a 3rd-party maintainer might even be able to bundle an app themselves.
There is also a sort of future-proofing built into this, which is that there is no such thing as a sandboxing system that's secure from day 1. We've seen this pop up on the web, which (for all of its general criticism about expanding capabilities) still has a much smaller attack surface than most native apps. Web standard developers are very careful, but there are still breaking changes sometimes when new security measures/permissions need to be added.
It would be very good if (as much as is possible), introducing additional new privileges and sandboxing capabilities to Flatpak did not require code updates for every single app using those capabilities that was built for an older version of Flatpak. If it's at all possible to avoid that scenario, avoiding it is the correct move to make.
And finally, touching on burdens for maintainers again, different people may prefer different permission systems. If at all possible, we want to avoid forcing individual app developers to fragment their codebase and maintain different branches for different security sandboxes. Flatpak should run the "base" version of your app with little to no changes. This also benefits users, because fragmentation and forcing users to install multiple simultaneous sandboxes on their machine is heckin awful.
So for all of those reasons, minimizing code changes for individual apps is a really good goal to have, even if it admittedly makes Flatpak more complicated and a bit harder on core platforms like GTK.
----
Other criticisms, whatever, I have some thoughts but it's not important for me to give them.
But the sandboxing criticisms I see in this article betray (to me) a lack of understanding about what the current problems are with phone/web security and what the next "generation" of sandboxing problems are that we're going to face. And I think that Flatpak is at the very least approaching sandboxing through a more mature/modern lens than the one we were using X years ago in early smart phones.
I'd argue they never did. Hard drives have always been slow and when they were the primary means of storage on the PC, people were always trying to find ways to speed things up. Now it takes seconds to boot rather than minutes.
"Laptop manufacturers are switching to smaller flash drives to improve performance while preserving margins. Budget laptops circa 2015 shipped with 256 GB or larger mechanical drives. Now in 2021 they ship with 120 GB flash. "
This is outdated information. I just checked Lenovo's store, and while they do have a super-low-end machine at $200 with 64 GB of eMMC, their real budget laptops start with a 256 GB SSD. The standard ThinkPad we ordered this year for users had a 1 TB SSD. Is it budget? No. But building toward the lowest common denominator is rarely worthwhile without good reason.
"Chromebooks are even smaller as they push everything onto cloud storage. Smartphones are starting to run full-fledged Linux distributions. The Raspberry Pi 4 and 400 use an SD card as root device and have such fantastic performance that we’re on the verge of a revolution in low-cost computing. "
Chromebooks and RPis are not PCs. Yes, some enthusiasts use them that way, but that is a niche, done by people who know what they are doing. They can easily enough avoid Flatpaks.
Linux is a big place, and its versatility allows for designing distros around multiple use cases. Flatpak and Snap are meant to make software distribution easier. Constraints such as storage are not a primary concern for most users; most machines have enough. Their mere existence does not mean other methods of deploying software aren't available, so if storage is a constraint, simply don't use them.
A lot of the arguments against Flatpak and Snap seem to be based around "it's not good for my use case." OK. Then don't use it.
"Each app with a new runtime adds another hundred megs or more of RAM usage. This adds up fast. Most computers don’t have enough RAM to run all their apps with alternate runtimes. The Raspberry Pi 400 has only 4 GB of RAM. Low-end Chromebooks have only 2 GB. Budget laptops tend to have 8 GB, mostly thanks to the bloat of Windows 10, but these app packaging solutions are catching up to it."
My case in point: still referencing niche machines. Most RPis are used for a dedicated purpose. 99% of Chromebooks never exit Google's ecosystem. And the author's notion of what qualifies as a lot of memory is, I think, out of touch. 8 GB is the base for an x86 PC. Not a Chromebook, but a real PC. Many have 16 GB, and it's not uncommon to see up to 32 GB in laptops and even more in workstations.
"Why shouldn’t storage shrink anyway? Software should be getting more efficient, not less."
My knee-jerk response is "why?". My more thoughtful one is that efficiency can be measured in many ways. Resource usage becomes inefficient when the user determines it is, and everyone has a different idea of where that line lies. Nobody is multitasking more than a couple of applications at any one time on a PC. Servers are different, but servers aren't an intended use case for Flatpak and Snap. If I can run the software I want at acceptable performance, then its resource consumption is efficient enough. But you know what Flatpaks are more efficient with? Time. I don't have to deal with dependencies; I just tell it to install and it works. Given how awful the application management experience in Linux traditionally is, I am willing to take the downsides for that massive upside.
"Such an app can drop a malware executable anywhere in your home folder and add a line to your ~/.profile or a desktop entry to ~/.config/autostart/ to have it auto-started on your next login. Not only will it run outside of any container, it will even persist after the app is uninstalled."
I think he has a point with security. Permissions can be vague and misleading, and they do have a supply-chain-attack vulnerability. But that's true of any software you get from a repository, store, etc. If Fedora's official Flatpaks can have malware, then so can their official repo.
His section about identifier clashes is not an inherent problem with Flatpak but rather a procedural problem with Fedora and Flathub. I think it's weird he brought it up when he started the article saying he wouldn't include easily solvable issues. Yet here we are.
"You would think that these packaging mechanisms would embrace simplicity if they want to attract software developers. In fact they are doing the opposite. "
This is the efficiency problem again. Define simplicity. What is simple in one area is complex in another. The fact is, nothing as complex as software on a computer can be simple everywhere; if it were, it would be useless. So the question is not how to make it simple, it is where you place your complexity. Traditional package management places the complexity on the user while keeping the software simple. Flatpaks keep the UX simple while making the software complex. Neither is inherently superior. It's all about design goals.
"All of these app packaging systems require that the user have some service installed on their PC before any packages can be installed."
And? All software has dependencies, and a lot requires some kind of runtime. Does he also consider all Java software problematic because it requires a JVM first? What about all web-based applications requiring a compliant browser? This is a non-issue in the context of how everything else works in the 21st century.
"A major goal of most of these technologies is to support an “app store” experience: Docker Hub, Flathub, the Steam Store, Snapcraft, and AppImageHub (but not AppImageHub?) These technologies are all designed around this model because the owners want a cut of sales revenue or fees for enterprise distribution. (Flathub only says they don’t process payments at present. It’s coming.)"
Conceptually there is no difference between an app store and repository. They are the same thing. App stores are just repos that are more user friendly.
"This is very far from the traditional Windows experience of just downloading an installer, clicking Next a few times, and having your app installed with complete desktop integration. This is true freedom. There are no requirements, no other steps, no hoops to jump through to install an app. This is why the Windows Store and to some extent even the macOS App Store are failing. They can’t compete with the freedom their own platforms provide."
I didn't expect him to advocate for the Windows model, and I actually agree: Windows handles application management the best. However, a lot of the criticisms he levels at Flatpak would apply to Windows too. You are guaranteed to end up with multiple copies of certain dependencies, because Microsoft's answer to dependency hell was to let developers ship their dependencies with their product and not have to care what everyone else had. This is extremely space-inefficient and can make for ugly under-the-hood management, but it works very well. The author misses this in his article because he just measures the size of an installer; Windows software often ships not as one file but as many, and the install.exe merely orchestrates the process. Again, it's all about where you place your complexity. In the Windows world the complexity is rarely encountered. Apps get to bring their baggage with them and decide where it all goes. This means Program Files can get ugly and inconsistent, and you may have to learn the behavior of individual software, but it usually works the first time.
"The Current State of Backwards Compatibility"
I have news for you. Linux's biggest problem here isn't that new revisions break compatibility; it's that old versions can become unavailable. The repo model of software distribution lends itself to "link rot": load up an older version of Ubuntu and it can't talk to the repos anymore. The servers are gone. And since, until recently, almost all software was distributed this way and managed by a package manager, you effectively can't get software working on an old Linux. Software preservation becomes monumental, if not impossible. With older Windows versions, if I have the install media, whatever form it may take, I can install and use the software, because it shipped with all of its dependencies included. No nebulous server required. DRM notwithstanding, of course.
The fundamental issue I find with all these distribution systems is that they always seem designed more to justify a computer science degree than to solve the problem in a user-friendly way. And that applies to debs and rpms as well. All the solutions seem so over-engineered that only engineers have a hope of getting any kind of consistent, usable experience out of them. Certainly you can try to hide all that complexity behind a nice GUI with icons and layouts, but that only works so well until something goes wrong. And then the user is stuck juggling a broken system and waiting for a reply on a support forum that may never come. That also mirrors my views on why Linux has yet to break into the mainstream desktop market.
But let's go even further, to another fundamental issue. Sometimes it feels like these systems are working to justify a basic design choice of Linux that has overstayed its welcome. Separating libraries and binaries was a fabulously beautiful engineering idea to fix a problem that stopped being an issue a decade ago. When your system has 32 MB of storage it makes sense, but less so when you can buy a 2 TB solid-state drive for $120 on Amazon. Counter to the argument in the article, storage is cheap and becoming cheaper at such a rate that continuing this philosophy will look more odd by the day. And now we've officially come full circle and engineered it away with Flatpak and AppImage.
In an ideal world, these perfect systems would work beautifully. But users aren't perfect, and neither are developers. That's why I now value software not by the novelty of how it solves a problem but by how few parts it uses to do so. For all of Elon Musk's personal issues, he was dead right about one thing: the best system is no system; the best process is no process. For every additional step of complexity you add when solving a problem, you gain three additional problems. 1. You have to maintain it across the half-lives of developer interest. 2. You have to teach users and developers how to use it the right way, a nearly impossible task. 3. All the systems that rely on it become more complex, duplicating problems 1 and 2 ad nauseam.
For me, part of the answer arrived 12 years ago with AppImage, and it has been largely missed. It works so well that I find myself breathing a sigh of relief whenever an application I want uses it, because I know it'll just work. It's a testament to the usability of AppImages that they are still in use years later, even without support from major distros. I think the Linux community at large is at risk of missing something very good there. At least for user-facing programs, it works wonderfully.