It takes years to cultivate a garden, but only minutes to destroy it.
[1]: https://www.zdnet.com/article/linux-mint-dumps-ubuntu-snap/
We know Red Hat is just as bad at forcing people to use their software (systemd...), but they have two key points which Ubuntu lacks:
- they usually win their software wars
- their software usually works well enough and they give you great support
- they maintain their projects and live with their decisions, unlike Ubuntu, which flip-flops with major updates (I'm waiting for them to switch from netplan to something else again and fuck up all my Ansible config again, which was already battle tested)
- they have better management tools overall.
The only reason we went with ubuntu in the first place was because it was familiar to all of us (we had all ran it on our desktops). When they take that familiarity away, then they lose their only real advantage.
Or maybe Shuttleworth doesn't care at all and he just wants the big bucks from MS.
It's always the same: the people who complain are the ones you hear the most.
Ubuntu has been a success because they took some risk.
The first was making it easy to install proprietary drivers, something they got a lot of heat for.
And since they managed to become the most popular Linux distro ever, I think critics should maybe ask themselves why.
Launching Chrome: I click the Chrome icon.
Launching PrusaSlicer: Start a terminal and type
chmod 755 ~/Downloads/PrusaSlicer-2.2.0+linux-x64-202003211856.AppImage
~/Downloads/PrusaSlicer-2.2.0+linux-x64-202003211856.AppImage
That doesn't seem like progress to me from a UX perspective.

$ sudo install ~/Downloads/PrusaSlicer[tab] /usr/local/bin/PrusaSlicer
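That one-time `sudo install` step could also be wrapped up as a tiny helper; a hedged sketch (the `install_appimage` name is made up, and it assumes `~/.local/bin` is on your `$PATH`):

```shell
# Hypothetical helper: make an AppImage executable and put it on PATH
# once, so it launches by name afterwards.
install_appimage() {
  src="$1"; name="$2"
  chmod 755 "$src"                    # AppImages download without the x bit
  mkdir -p "$HOME/.local/bin"
  mv "$src" "$HOME/.local/bin/$name"  # later: just run "$name"
}
# e.g.: install_appimage ~/Downloads/PrusaSlicer-*.AppImage PrusaSlicer
```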
$ PrusaSlicer

My experience is completely different. I spent 7-8 years using Linux on a laptop, about 4 of those using either Ubuntu or derivatives, and my experience was that after about 6 months it was time to reinstall the OS.
Since then I have installed Fedora and it has been the most stable and resilient system I have ever used. I have also been treating it badly as an experiment (like powering it off randomly when my 50 Reddit tabs were causing too much lag) and it has no problems at all.
Currently I mostly use my office laptop with Debian, and it has the same issues as Ubuntu.
From my point of view I cannot understand why fedora is not more popular.
https://launchpad.net/~saiarcot895/+archive/ubuntu/chromium-...
It's available on Ubuntu 20.04.
At one point the regular Chromium had a higher version number than the Chromium in the PPA, so aptitude updated to the regular Chromium. I then gave the PPA Chromium a priority of 1000 by creating a file called /etc/apt/preferences.d/saiarcot895-chromium-beta with the following content:
Package: *chromium*
Pin: release o=LP-PPA-saiarcot895-chromium-beta
Pin-Priority: 1000

More seriously, this is not an unexpected response if you've been using a system for 10 years or more. If you were new to the system you would just say "oh, interesting, this is how it does self-contained packages", but your perspective, from having a system you understand well, that has worked for all your needs, and whose workarounds you know when you can't get exactly what you want, works against your perception of a new feature.
In my experience it is the leading cause of burnout in engineers. You learn things, you use things, you customize them to your needs, and then the 'new participants' who don't have any experience and find those things "arcane" or "opaque" re-implement them for themselves, their friends, their company, whatever. And then it's something new, and the new thing gets the exposure, so still more people see the 'new' thing without even knowing there was an 'old' thing, and it's just "the way this feature is done."
As an experienced person it is tiring and bothersome to have to re-implement tool flows, capabilities, and other parts of your environment because some youngster re-invented the wheel yet again and you were not in a place to educate them on why the existing wheel was just fine.
The longer you live the more cycles you go through and the more ridiculous each new re-imagining of how to do 'X' becomes until all you seem to do is complain about how in the previous versions everything worked fine and this new stuff is crap and you aren't going to put up with it.
At which point ageism kicks in and your employer lays you off with mumblings about "not a team player" or "resistant to learning new skills" as if sharpening a knife with a round stone is any different or any better than sharpening a knife with a square one. It is easy to get bitter. It is easy to just roll over and whine with your fellow "oldsters" about the "good old days". It is also a kind of death.
Counterintuitively, I suspect that if companies invested in keeping the status quo, engineering salaries would go down. That would result from skills learned as a junior engineer always being relevant to the current environment but being applied ever more efficiently (as is typical, people do things more quickly as they get more experienced). That minimizes the number of people you need to develop your products, which keeps the number of engineers you need to employ down, so your costs go down, and the poor engineers who aren't currently working have to compete more aggressively for available entry jobs by taking a lower salary. Fortunately, because it is counterintuitive, I don't think there is any risk of it coming to pass.
In my opinion, the elves have left Middle Earth. Ubuntu and Ubuntu's current cohort of developer/users are more interested in an open source version of Windows than anything else. As a result more and more "windows like" architecture and features are replacing the old "UNIX like" architecture and features.
I'm really conservative about software and I want to keep my apt, dammit! I did like Unity, but only because I was never too attached to GNOME.
----
[1] I still remember when KDE devs broke that desktop for me, I think it was KDE4? They decided you just couldn't place desktop icons -- "you're doing it wrong", foreshadowing Steve Jobs -- and there was much gnashing of teeth, and I and many others ragequit to Ubuntu. Little did we know, of course :P
The AUR is work of genius.
I still build all my containers at work with Ubuntu though.
However I can't agree with this:
> apt/deb is a wonderful package management system and everyone is happy with it, at least the majority of Ubuntu/Debian users. Besides, dnf/rpm is also a similar packaging system for Fedora/RH systems and everyone is happy with that too.
Debs and rpms are great at assembling tightly coupled monolithic systems. Great! Let's keep using them for the base system. However, when I want to install a Qt app on a GNOME system or, gasp, a proprietary app, debs are insufficient. I want all of the Qt libs embedded in the package. I want the proprietary app in a container. I want MAC with a polished UX. I don't want debs to worry about those features. I want an "app store" done right: open yet verifiable. Protection in depth.
- a user-space install option
- rollback functionality (!)
- being able to install multiple versions at the same time and switch between them
- if I really need to: being able to install the latest version (and even an unstable release); if that means that apt-get has to download and compile stuff, then I'm ok with that.
I don't understand why anybody wants this.
Libraries should have major versions and the latest of each major version should be compatible with anything using that major version, because that's what major version means for a library. You might then need to have more than one major version of the library installed, but any two applications using the same one should be able to use the same copy, and then have it maintained by one person in one place.
If every package has a separate copy of every library, people have to maintain them all separately. When that library has a security update, you now have to update five dozen packages instead of one or two, and have a security vulnerability if any of the maintainers don't keep up in a timely fashion. Which not all of them will.
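The one-copy-per-major-version scheme described above is essentially what the soname convention already does on disk; a small sketch (the library name is made up):

```shell
# Apps link against the soname (libfoo.so.1, the "major version" name),
# which is a symlink to the latest compatible build. A security update
# means shipping one new file and moving one symlink.
demo=$(mktemp -d)
touch "$demo/libfoo.so.1.2.3"
ln -sf libfoo.so.1.2.3 "$demo/libfoo.so.1"
readlink "$demo/libfoo.so.1"
```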
> I want the proprietary app in a container.
People want containers to be magic but they're actually a hard problem. You want the app not to be able to do anything you don't want it to but still be able to do everything you do want it to.
A backup app that can't read my files is useless; it can't back them up. But it shouldn't be able to modify or delete them. But it should be able to modify its own state. It shouldn't have general network access but should be able to communicate with the backup server, which might have to be specified by the user and not the package maintainer. It doesn't need access to the GPU or the ability to use gigabytes of memory, but it does need to be able to transfer a lot of data over the network, but the data it transfers is lower priority than other network packets.
That requires the person configuring the app's container to have both detailed knowledge of the app and detailed knowledge of the container system. It's common for this not to be the case.
And that's why containers are a mess, not anything to do with the package manager, which should have little to do with the container system outside of packaging the app's default container configuration with the app.
Because creating debs is largely a completely distinct undertaking from the dependency and build management the developer of an app does.
Bundles, whether via images or static binaries, allow app developers to distribute their app against the exact dependencies it was developed against -- potentially using the same build system.
There are obviously tradeoffs to each approach, which is why I don't think there's one right way to distribute every bit of executable code on a system.
> People want containers to be magic but they're actually a hard problem.
I work on a container orchestrator, so I understand some of the difficulty. :) Mobile apps are years ahead of desktop apps when it comes to containerizing in a user friendly way. Obviously there's plenty of work still to be done, but the problem is far from intractable and the benefits are enormous.
They should, but accidental breaking changes are a thing. Plus, Flatpak more or less solves this by having standard runtimes (base collections of libraries/dependencies that Flatpak apps target) that get security updates.
> That requires the person configuring the app's container to have both detailed knowledge of the app and detailed knowledge of the container system. It's common for this not to be the case.
With Snap, developers explicitly ask for the permissions they need and the approval process evaluates if it makes sense for that app to have those permissions (and by default or not).
> People want containers to be magic but they're actually a hard problem. You want the app not to be able to do anything you don't want it to but still be able to do everything you do want it to.
As I see it, the problem with the containerization in Snap and similar solutions is the isolation of system configuration.
I agree that permissions are a hard problem, and honestly I am not sure how relevant they are for snaps, but what is in theory feasible is that installing a snap could be made completely reversible.
I believe that is true of Flatpak at least.
Yes. I think most people (most people don't run Linux in any form) would like to think of their system as having a collection of independent applications, not a set of libraries.
If people are expected to "maintain", or even understand, the concept of shared libraries, then I think the system is only geared to power users and tinkerers (the current user base more or less).
Snap is a way to contain/scope this kind of scripted activity. This is a welcome change. Additionally, deb/apt has much worse transaction support than yum and its successors: you can simply roll back yum transactions, but good luck rolling back a borked APT system where package maintainer scripts have already done unspeakable things to the system and you're kind of stuck. APT's configuration system is also arcane and badly documented; the debhelpers that control how most packages are built are tens of thousands of lines of Perl, Python, C++, C, makefile and m4 code that somehow work, but are in no way a path to straightforward, predictable packages. It's ultra flexible, but also ultra complex. The trend in package/release management is towards simplification, not complication. A stopgap solution was the many projects that allow generating Debian packages from venvs or random directory trees (for which you could also use the deprecated old-style DEBIAN package format and pass it to dpkg-build without the arcane dpkg-* toolchain, but again, Debian claims this kind of packaging is not "well formed", and who knows when it gets removed).
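The yum-side rollback mentioned above is essentially a one-liner; a hedged sketch (guarded so it's a no-op on systems without dnf, and the transaction ID is a placeholder you'd read from the listing):

```shell
# Transaction rollback on the yum/dnf side: every install/upgrade gets
# a numbered transaction that can be undone as a unit.
if command -v dnf >/dev/null 2>&1; then
  dnf history list | head -n 5 || true   # show recent transaction IDs
  # sudo dnf history undo <ID>           # revert that one transaction
fi
```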
Snaps are just a different way to do integrated containerized applications with scoped config management, versioning and a release system on top (which makes the difference). Perhaps somebody could make a better solution. Meanwhile, RedHat introduced AppStreams -- probably Canonical also felt it needed an answer to that.
Ubuntu's exposure to its "universe" repository component (many packages in which are badly maintained and not really part of the release QA) is also a huge risk for an enterprise distribution (where the money is), so it is no surprise that Canonical is looking to decouple its core offering on the server platform from it, maybe at some point removing it from the base install altogether ("use at your own risk"), and then dropping it.
We have come up with such convoluted solutions to this problem—Docker, nix, Snap, etc—when the simple option is sitting in front of us. And it works! On my Mac, I don't want to install homebrew or MacPorts—Package managers make me feel like I never know the state of my system—so when I need a command line tool, I try to either track down or otherwise compile (in a VM) a static binary. When I can find them, they work perfectly!
But yeah, snap is a great solution for that, especially because getting non-FOSS software into official distro repos is not possible, and hosting your own repo is HARD.
And that's what's happening. I installed Telegram using a snap because of this.
Sure, this might be true for the average user, but it is toxic to the “super user” community that’s in the best position to help support the larger community and may end up pushing them away.
Snap at the very least should have an opt-out feature, if not be opt-in during an install.
More criticisms may be found here:
https://en.wikipedia.org/wiki/Snap_(package_manager)#Critici...
To be fair, that has always been Ubuntu's target market...
You can always use only the system package manager, or use a distro that doesn't use snap.
All those complaints feel so moot.
It's really hard to be in FOSS nowadays: you can't make a move without your users judging you every step of the way, because a lot of them are idealists who expect a lot from you, yet don't think about the non-tech-savvy users.
It's way easier to make proprietary software: most of your users don't criticise every single decision you make, you don't have to justify yourself, you get many more users, and you make money out of it.
Why on Earth would you think that users of proprietary software don't criticize it? I'm pretty sure that Windows gets more criticism than Ubuntu...
For those unfamiliar, Debian releases come out about once every two years, at which point all software in Debian's repositories is frozen at its current version. Software receives security updates between releases, but nothing else.
I personally think this is wonderful, and I would absolutely use Debian if I was interested in switching to Linux (which I'm not, at the moment). Constant change is inherently frustrating, even when the changes themselves are a net positive (they often aren't). Debian's approach provides a level of reliability and consistency that is sorely lacking in most modern software.
So, while I also recommend Debian, I do so only if you too agree with the above paragraph.
Beyond that, it is increasingly common to see Ubuntu used in the enterprise, and the dev tasked with dealing with the issue may not have the authority to decide whether to use Ubuntu or not.
[1]https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1643706
> If you wish to install the traditional deb package, it is available as usual via APT, with all security and critical bug fixes. However, there will be no major VLC version updates until the next Ubuntu release.
This is in line with Ubuntu (and Debian) repo policies. You do not get major software updates in between distribution updates unless you use a third party repository. You do get bug fixes and security fixes, and/or can track unstable if you need the bleeding edge.
You can try using soft links as a work around.
A hard link should be able to see the file, but hard links need to be on the same file system, and I don't know enough about Ubuntu's default partition scheme to say whether that's doable for most users.
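A demo of the two link types using temp files (a sketch only; in real use the target would live outside $HOME, and a strictly confined snap may still refuse to follow a symlink that resolves outside its allowed paths):

```shell
# Soft link: records the target path. Hard link: a second directory
# entry for the same inode, so it must be on the same filesystem.
target=$(mktemp)
echo hello > "$target"
ln -s "$target" /tmp/soft_link_demo.$$
ln "$target" /tmp/hard_link_demo.$$
cat /tmp/soft_link_demo.$$
rm -f /tmp/soft_link_demo.$$ /tmp/hard_link_demo.$$ "$target"
```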
Sorry, snaps are a LOT slower than just running a binary. Did I say they're slow? Well, they're slow.
I have an SSD and it feels like it's 1992 and I'm trying to run some app from a Cyrix without cache and 16MB of RAM. I switch to the binary version (oh my, Chromium), and it's a freaking flash: 0.x seconds and you're there, the full app is available.
Snaps are a NO GO my friends.
Besides having LOTS of problems running outside the standard GUI (GNOME 3), or even in the standard (supposedly heavily tested) GUI, they are slow.
Sorry, I've already said that, huh? SLOW, that's snaps.
If there is somebody from Ubuntu here, please take a serious look at how snapped apps (pun intended) read/write $HOME defaults.
I mean, we have to have defaults somewhere. So things like the colour theme, the theme engine, the default download path, etc. should be followed exactly as the user has configured them.
I use Ubuntu, but I certainly would not be using it in the future if my applications, which now take merely 0.x seconds to open, start to take 3, 4, 15! seconds to open. In fact I have started to look at Debian and Fedora; they currently appear to have saner defaults than Ubuntu.
No, the second time I open an app in a session doesn't count AT ALL for the speed.
Here are some problems I've had:
- snaps use a different directory than our main app. So if you install our Debian package, then go to a snap package, all your data seems to vanish. It's just in another hidden directory. I tried to figure out how to get the directories to sync up but couldn't get it to work, as it's yet 'another thing to support'. I only have so much time.
- snaps have various bugs that you encounter after you've shipped the app that aren't present at build time. Mostly due to being in a container and 'reasonable' things not being accessible and needing to be granted access to via a configuration file.
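For the first issue, a one-time migration might look like this (a hypothetical sketch: "myapp" and the config paths are made up; snaps keep per-user data under ~/snap/<name>/current):

```shell
# Hypothetical: copy deb-era user config into the snap's data dir so
# existing data doesn't "vanish" after switching packaging formats.
app=myapp
mkdir -p "$HOME/snap/$app/current"
if [ -d "$HOME/.config/$app" ]; then
  cp -a "$HOME/.config/$app" "$HOME/snap/$app/current/.config"
fi
```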
The strategy I'm thinking of migrating to is to just distribute as a .deb and have our own apt line that is installed during the .deb installation. I think this is what Slack and other Electron packages have migrated to which is easier for them to support.
I mean conceptually it sounds great. Put your apps in a container. They will be isolated. Great. But in practice it's a nightmare.
To be fair, though, macOS had similar issues when they started going with isolation and privileges.
I think the main issue is that none of the OS maintainers spend a day in the shoes of a package maintainer. And if they did, they don't care, because they own the OS and many of these apps compete with their core products.
At least you have plausible deniability that your behavior isn't anti-competitive - you're just trying to improve the security for the user!
For example, Zoom got a ton of crap about their installer, but they compete with FaceTime, which DOES NOT have to constantly ask the user for privileges. Apple granted FaceTime these privileges via the OS.
From the perspective of a user, it's horrible.
"Can this app access your Downloads folder?"
"Can this app access your Webcam?"
"Can this app access your Microphone?"
"Can this app access your Documents folder?"
... and on and on ad nauseam.
Why a proprietary backend though? I suppose Canonical views packaged apps as a platform opportunity and wants to be the first to "capture" the users before somebody bigger comes and takes over?
Exactly. Personally I have been sticking my desktop programs into "firejail"-managed "containers" for a long time. It's a good thing that a similar solution has been implemented that is suitable to bring this to the masses.
Couldn't you just make a "default-firejail" package that installs the symlinks to firejail somewhere that's ahead of the default install location in your path? And maybe consider installing that in the base install, though that potentially risks breaking things unexpectedly for users.
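As far as I know, firejail's own `firecfg` tool works roughly this way: symlinks named after applications point at the firejail binary, firejail inspects argv[0], and starts the matching program jailed. A guarded sketch of the manual version (firefox is just an example, and ~/.local/bin is assumed to be early in $PATH):

```shell
# Place a symlink named after the app, pointing at firejail, earlier in
# $PATH than the real binary. Guarded: no-op on systems without firejail.
if command -v firejail >/dev/null 2>&1; then
  mkdir -p "$HOME/.local/bin"
  ln -sf "$(command -v firejail)" "$HOME/.local/bin/firefox"
fi
```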
> it is plain weird for every app I install to have so much file system and system access
A quick glance at Wikipedia to make sure I'm not talking out of my ass seems to confirm that:
> is a Linux kernel security module that allows the system administrator to restrict programs' capabilities with per-program profiles. Profiles can allow capabilities like network access, raw socket access, and the permission to read, write, or execute files on matching paths. [...] AppArmor is enabled by default in Debian 10 (Buster) [from July 2019].
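A profile in that spirit might look roughly like this (a hand-written sketch for a made-up /usr/bin/example, not a real shipped profile):

```
# /etc/apparmor.d/usr.bin.example -- hypothetical profile
#include <tunables/global>
/usr/bin/example {
  #include <abstractions/base>
  network inet stream,                  # TCP networking only
  owner @{HOME}/Downloads/** r,         # read-only access to Downloads
  owner @{HOME}/.config/example/** rw,  # its own config state
}
```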
(Also, I'm not a fan of claiming "devil's advocate" when you're saying something that you know everyone will agree with. It's similar to saying "downvote me all you want but [insert popular HN opinion]". Of course the principle of least privilege for software is something lauded by every logically thinking person.)
For whatever reason it's decided that I'm not allowed to connect to the network.
It's easy enough to remove the package, but it likes to tag along as a dependency when installing updates.
Unfortunately, snap comes with all of these extra issues that happen when the developer isn't empathizing with the user. Also, much to my chagrin, snaps don't actually uninstall cleanly, and can really hoop your system. I now install snaps INSIDE of an LXC container so that snap can't misbehave and break my system, or else if I can I just use apt with a custom repo (for docker because the snap is awful). Ubuntu 20.04 will probably be my last Ubuntu system, and that's a shame... I really liked it.
This reduces the maintenance burden, while also allowing you to have up to date packages.
The way this is done is by bundling all dependencies in your snap package rather than using the system ones.
I think it's a great idea for applications you want to be updated frequently, like VS Code, Chrome, etc.
It's not perfect, e.g. back end is closed source, but I'm glad Ubuntu is giving us options and being at the forefront of package management.
I ended up settling on Silverblue, a Fedora derivative that uses an immutable base system along with Flatpak for applications, and it’s been great. Equally trouble free, and Flatpak has many of the benefits of Snaps without some of the downsides (fully open source).
Those arguments always come from very tech-savvy people.
They are a good example of purity over practicality, completely ignoring all the problems apt/yum have for the average user or for a dev publishing software.
If you are looking for reasons why Linux on the desktop never happened, well, this is one of them.
I wasn't keen on using a derivative distro of a derivative distro; thought of using Debian but wasn't very confident if the issues won't persist on it. Considering everything just works perfectly with Pop!_os, I might just stick with it.
Ubuntu was such a breath of fresh air at the beginning (I still remember Dapper Drake). It is sad to see it going this way.
Supposedly they're working on tabbed and stacking layouts for tiled windows; if that is the case I don't know if I'll go back to i3 or sway again!
I tried their tiling option but didn't stick with it because when I maximized a window and toggled to another, the original window went back to being tiled. I'm too lazy to remember shortcuts for tiling WMs (probably to my own detriment).
Yes, some snaps are not updated, and VLC only working in the home directory as another HN commenter said is a pain.
Snaps still solve a lot of issues on Linux that Windows does not have.
What happens if the new version of your editor breaks some well known plugins?
On Windows you just install the older version from the archives. On Linux you risk dependency issues. Snaps are a single command to switch to an older version. Very important in both the software and media space.
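That single command looks something like this (a sketch; "vlc" is just an example package, and the block is guarded so it's a no-op on systems without snapd):

```shell
# Snaps keep the previous revision around, so switching back is one step.
if command -v snap >/dev/null 2>&1; then
  snap list --all vlc || true   # show installed revisions
  # sudo snap revert vlc        # switch back to the prior revision
fi
```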
One option is to stay on the LTS version of Blender.
Another example: back when I used Atom, one upgrade completely broke a popular plugin for me because of a change in Chrome. I thought it would be fixed quickly so I didn't revert the package, and I lost the older version in the cache. It took the Atom devs 2 weeks to handle the change to Chrome, and then the Arch maintainer of the Atom package didn't get around to upgrading it for a week or two after that.
Things like apt pinning really don't help you when you discover the issue on your main workstation. It also doesn't change the fact that it is so much easier to revert on Windows. Snaps make version control even easier to do on Linux than on Windows.
The back-end being proprietary is a good argument, but for a company that has worked so long with Linux and Free Software, give Canonical a little trust to release it.
I can understand the "proprietary store" concerns of this thread, however.
Take a multi user system and users who run applications like chromium or firefox.
It's dangerous to upgrade an application while users are running it, as the files the running application depends on can change, making it either break in weird ways or forcing the end users to restart it.
If these apps were just distributed as snaps, it wouldn't matter. They would keep on using the old image without any problem, while new executions would get the new image. If one really wanted to encourage users to exit and restart (i.e. for some security hole), the same mechanisms that exist today to get people to restart could be used.
That said, I think it should be a choice, not something forced down our throats. If I install something with apt/dpkg, I expect it to be an apt/dpkg package, not a snap. If I want to use snap, I'll install it with snap.
They have widespread adoption, with almost 10x the install base of Flatpak.
Do you guys really need to keep throwing blogs at something which isn't going away and is useful to users? How is this useful in any way? Canonical isn't suddenly going to give up on this, and I don't even want them to.
This feels very natural compared to what Apple, Google and Microsoft do on their OSes. But Canonical seems to have forgotten that such behavior is what drove a lot of people to Linux. It is never going to be accepted. Nor should it be.
Personally I find snaps a very disappointing solution to a very interesting problem.
Do you really want the answer to that...
There are people who would loudly insist that there should be nothing but a kernel and Stallman's FAQ.
Literally nobody says that. Not even RMS says that.
Kudos to Linux Mint for taking a stand in regards to it, and hopefully more Ubuntu-based distros do the same until Canonical gets the idea.
I don't think they're out there trying to be malicious, but they need to be set back to the correct path as it has been the case with odd monetization choices for Ubuntu before that didn't really benefit anyone.
I had seen some discussion before about the server not being open source but I can clearly see the store api there. I'm just looking at the code now and haven't taken any time in testing it out for myself yet.
Regarding Debian packages and apt: as we have seen from the sheer number of PPAs, there really has to be some solution for that. I like the snap format and believe it's heading in the right direction. It still has some problems with desktop software, but those seem to be getting addressed as it has progressed.
There is this strange cold war between Red Hat and Canonical, and Red Hat seems to have most of the NIH problems if you look at the history. I really don't think Red Hat or some of its developers like relying on code from Canonical.
The privilege escalation attack looks a bit worrying, but aside from that it looks either the same as or better than native packaging (sandboxing for some apps), or like something that is on the developer (out-of-date packages).
It's possible I'm misunderstanding something, if so please feel free to tell me so.
As long as they are limited to applications (i.e. not OS components) and look reasonably like normal applications.
Why? Because you can't ask developers to package their apps for 100+ distros. And they let us run latest versions of apps on an otherwise stable/old OS.
Which two? I mean, a package is only as good as its curator. That sounds more like a complaint about Ubuntu's Snap update policy or the community support for whatever you were trying to install than anything about Snap.
I don't understand the fuss here. People who want control over the open source software they install still have it, it still works like it always has. People who don't care can't even tell the difference.
And in a handful of situations, like large apps with extensive dependencies, or externally managed builds, or needs for cross-distro binary compatibility, Snap has real and tangible advantages.
> And in a handful of situations, like large apps with extensive dependencies, or externally managed builds, or needs for cross-distro binary compatibility, Snap has real and tangible advantages.
I see that snaps can offer these advantages, but so can AppImage and FlatPak, and these are even more cross-platform and don't come with the same limited ecosystem.
It's all open source code, so you can download and install it yourself, so I don't buy the argument about it being against the GNU philosophy.
Apt and snaps solve different problems. The only argument I can see here is the one about the back-end which is old and tired.
What kind of learning curve should one expect if migrating from debian/ubuntu based distributions?
It's a very, very different sort of distro, but if you know basic functional programming and are willing to do some "unlearning", you should be fine.
Plus, the desktop is becoming less and less relevant every year.
I spend easily 10 hours a day probably at a traditional computer. My wife spends an equivalent amount of time I'd guess in front of a screen, but it's her phone.
I switched to Fedora Server. The packages are much fresher, the kernel is fresher. It is a “staging” area for Red Hat, which for me is a plus.
Goodbye Ubuntu. I’ve been using you as my primary Linux distro since version 6.06!
Bye bye Ubuntu if not.
GitHub is proprietary. I understand that some people don't accept that either. But if you accept and use GitHub, then you should have no problem with snaps on this basis.
Also, on this topic, consider this quote[1]:
"We did that experiment with Launchpad; the people who said they wouldn’t use it because it wasn’t open source were the same people promoting a closed source alternative. When we open sourced Launchpad, they said that they wouldn’t use it anyway because Canonical was the primary contributor."
I am not saying that Flatpak is proprietary. I am saying that the focus on the backend not having source available is specious.
> Developer controls the updates.
Users CAN defer updates (eg. because they're on a metered connection, or they only want to update on Patch Tuesdays, or whatever). However, in the default arrangement they cannot defer them indefinitely. But in today's Internet-connected world, refusing updates forever is also anti-social and unacceptable, so I don't see a great loss there. However you can manually install a snap such that it never updates[2].
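The manual-install route looks roughly like this (a hedged sketch: as I understand it, unasserted "--dangerous" installs aren't attached to a store channel and so don't auto-refresh; guarded so it's a no-op without snapd and without network):

```shell
# Pin a snap at a specific downloaded revision by sideloading it.
if command -v snap >/dev/null 2>&1; then
  snap download chromium || true               # fetches chromium_<rev>.snap
  # sudo snap install --dangerous chromium_*.snap
fi
```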
> APT does a fantastic job as it is.
No, it doesn't. It is fantastic for distribution releases that don't change their dependency structure after release. It's terrible for shipping new software to an existing distribution release. This is seen in packages that must have major updates frequently (eg. Firefox, which added a whole new Rust toolchain dependency that had to be backported into existing stable distribution releases). It's also seen in various third-party apt repositories that ship software which breaks users' systems, causing future upgrade issues, because they mess with distribution-provided dependencies in a way that future distribution package updates do not know about. apt/deb also provides no application sandboxing for third-party software that you might trust less than your distribution. If as a developer you've ever tried to ship software to users as deb/apt, you would know this. Complaints about it are all over the Internet and this has been the consensus for many years.
> Don't shove it down our throats, make it optional at least.
It already is optional. You can remove snapd and pin snapd in apt to a negative score to never have it installed again[3]. Chromium won't be available to you as a distribution-provided deb in Ubuntu 20.04 then, but nor is it in Mint.
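The negative pin would live in a file under /etc/apt/preferences.d/, something like this (the filename is arbitrary; this mirrors the approach Mint ships):

```
# /etc/apt/preferences.d/nosnap (any filename works)
Package: snapd
Pin: release *
Pin-Priority: -10
```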
[1] https://forum.snapcraft.io/t/linux-mint-20-disables-deb2snap...
[2] https://forum.snapcraft.io/t/disabling-automatic-refresh-for...
[3] http://manpages.ubuntu.com/manpages/focal/en/man5/apt_prefer...
If you don't like the Snap Store, well, get coding and be prepared to fork the client also.
Given the amount of OSS projects submitting packages to winget on github it's pretty safe to say that most projects don't care if a web service is OSS or not. That boat sailed like 10y ago.
> refusing updates forever is also anti-social and unacceptable
Woah, what?
What about Flatpak or AppImage?
It is not really optional if you have to specifically prevent Snap from being installed again.
Which is a website and has nothing to do with software you install on a workstation.
Can you elaborate?
Maybe it's a thing for linux on the desktop, but my time isn't worthless so I don't do linux on the desktop.