I think there’s value in a system that technical users could understand from top to bottom. This is what attracts me to small systems like Minix, Scheme, and Standard ML, just to name a few. However, I’m curious about how the complexity of big systems can be tamed. A system that is hard to reason about even with the source code being accessible is more expensive to maintain and to modify.
By removing the misfeatures that make layers of abstraction seem necessary. Linux has already gone through this several times with things like the transition from HAL to udev, or the potential simplifications falling out of well-written copy-on-write file systems (gefs is one I particularly have my eye on here). The main things that dishearten me in this regard are, respectively, the complexity of GPUs (and thus their supporting infrastructure in software), and what is needed to support the OS-within-an-OS that is the modern web browser.
Some people might say Linux has grown up, but for me it's off-putting. I don't want my OS to be for everyone, because my wishes are pretty different from most people's.
I moved to FreeBSD for my desktop and I'm really happy with it. It's simple and has some unique features like the ports collection, jails, and ZFS as a true first-class citizen. Of course, to each their own, and I'm glad for some Linux-driven initiatives like KDE. I'm happy some people like Linux, and I still use it here and there too, like for premade Docker containers.
Linux CAN be for everyone; that's precisely why so many different distros exist, to focus on different groups of users who have different needs and preferences.
It would be nice if ZFS was a true first-class citizen in Linux-land though.
So, yes, Debian is far more complicated than I need or want. I have pondered on building my personal Linux system, using say Alpine or Void Linux, but that is work that seems unnecessary to me. Better off just to use Debian as it is.
And, by the way, this bloat isn't just due to the distros. I build Emacs from source, and you should see the bizarre dependency set it needs, many of which have nothing to do with anything I use Emacs for. I have both GTK and Qt libraries on my systems. TeX Live's distribution media would fill up 20 IBM 2314 disk storage devices (circa 1970), each containing 8 washing-machine-sized disk drives (plus a spare). Install an application, and you might find yourself installing (different versions of) Perl, Python, and/or Ruby. It goes on and on.
I have felt for a long time that dependency management is one of the big unsolved problems of software engineering. It's not surprising that the resulting systems have the appearance (and texture) of big balls of mud.
In my experience, people don't write up lists of dependencies, especially since some are recursive. Instead, they look at the available tools and libraries, determining which are useful for their purposes.
I'm not picking on Emacs (though its earlier icon as an overflowing kitchen sink gives a clue). This is a phenomenon of lots of software. If I want to use Asciidoctor-pdf, I must install Ruby, for example. The very fact that we have a wide variety of tools, languages, and libraries guarantees this sort of explosion. I'm as guilty as anyone of this, as my preferred language is Scheme. So if you want my programs, you have to install a compatible Scheme system.
My real point here is that, given our understanding of dependencies, this sort of explosion is inevitable. So if I had a perfect, svelte Linux system, it would start picking up cruft almost from the moment it was created. That's just where we're at.
But the difference from Windows and macOS is that you can know what's going on.
By looking at the documentation and man pages, inspecting the configuration files, and maybe even reading source code. If you really want to know exactly what happens, you can find out on a FOSS Linux system.
With closed-source systems like Windows and macOS, you'll probably never know 100% of what's going on.
With Linux you are also free to create an installation from scratch where you know every piece exactly, because you wrote every configuration file for it. This is a lot of work, but the alternative is relying on "magic" programs (as the author described them) that do most of the heavy lifting for you.
I think a lot of this is because there's been an effort to provide more QoL functionality out of the box in distros, and a lot of the features are inherently complex things with complex code. You can understand it, but understanding each individual piece takes a lot more time than it used to, because each piece is solving for a wider use case.
The audio stack overhaul is a good example of one that's positive. PipeWire is actually game changing in terms of UX and how well _it just works_ to the point that there's no longer any kludging of asoundrc or any other configs and praying that an audio device you want to use works as expected.
There seems to be a tendency (for better or for worse) in Linux communities to want to deal with these inherently complex things by ignoring the complexity and taking the stance that building your own solution to those problems as they arise is better, but it really feels like a gatekeeping stance. Maybe it's a worse is better situation, or there's some level in the middle, but the current state of Linux is far better than it's ever been imo.
In my observation over several decades, documentation in general has gotten really bad, or non-existent, all across industry. It's everywhere: no one wants to write stuff any more.
But, I also think that modern OSs have somewhat outgrown their users. I'm not sure I actually need a multi-user OS, nor secure signing of various things. Though if I were to use Haiku and get what I asked for, maybe I'll get exploited and regret it somehow. Yet the likelihood seems so low versus the complexity involved to protect me.
Anyway, for those who think Linux is growing to have more than needed, you'll always have the distros that refuse to change, like Slackware. But, I think saying the old linux was better is nostalgia, since Slackware really isn't fun to use.
> Granted there are other Linux distributions like Gentoo, Alpine, Void, NixOS, etc. that are still conservative in some of these regards... However, these are not the popular ones, thus not the ones where the most development effort goes into.
Has this ever not been true of hardcore/transparent "Linux"? Yes Ubuntu is trying to be a free Windows-like, yes Gnome is pretty locked down. But alternatives still exist, it's just they're unpopular (and specifically for the reasons the author prefers them). Yes the big names get most of the development attention, but in terms of overall hours contributed are the smaller names actually getting less as an absolute number? Or just a percentage?
> In fact, the overall dumbed-down and closed-up -- sorry I meant to say "optimized for the enjoyment of our customers" -- Apple OSX seems to be a better alternative...
This seems to be meant as snark, but isn't this kind of the core tradeoff driving this whole dynamic? Sure simple and transparent software can be fun and useful, but much of the modern computing stack is simply too complicated to be genuinely transparent. Simplifying the bootloader won't make WebRTC simpler for an amateur to understand.
The thing is, you can easily do all that stuff right now on Linux, even if you're a casual computer user. Unlike 15 years ago, ALL that stuff is now done inside a web browser, so the OS is really nothing more than a platform to run Chrome (or better yet, Firefox). Any idiot who knows how to use a PC can start a web browser, and from there it's no different whether you're on MacOS, Windows, or Linux.
And Linux has the huge advantage that it won't suddenly do a forced update and reboot your computer while you're doing a presentation at work.
But also, if you've only used one operating system, there hasn't been any need to generalize what you've learned from that system, interaction-wise. We had to learn a bunch of computing conventions. Eventually you learn things like "most command line tools have -v/--verbose". Grandpa doesn't think computers work this way; he thinks Windows works this way, and how should he know how Linux works?
'What games are like for someone who doesn't play games' is a pretty good video about a similar thing. It's easy for you, you know computing vernacular.
I don't, and have never, wanted this -- exactly because of what doing that means for the direction of development for the OS.
Then got to the part where they mentioned this is targeting more mainstream distros and thought “ah so this is why I chose gentoo.” I never considered any of the mentioned mainstream distros because I just assumed given the popularity, there’s more fuckery. Less granular control == more “user friendly” which seems like a natural byproduct of something becoming more popular.
I certainly don’t have the historical perspective of the author, and one can debate the merits of the myriad running processes, but I just wanted to share the perspective of someone entirely green to Linux: what was outlined was what I already assumed.
>However, [Free/Open BSD] have shortcomings of their own, mainly being so conservative that they've almost been left behind by the overall open-source community.
I feel like the author doesn't understand that these are connected…
It's a modern Linux, closely follows upstream projects, and when you wonder how to set up some network configuration or something else you just open the script in question and it's mostly obvious.
It's a small community so hasn't got the same amount of eyeballs and might not always be as quick with security fixes as the bigger projects, but it might be a bit of stale, old, yet somehow fresh air for the author.
I can't even use my large MSI 1440p monitor with this computer because Linux (also perhaps because of the actual gfx card too, to be fair). I had to manually install Discord's update yesterday and now there are 2 Discord apps on my computer because Linux (and only one works).
The Snap packaging system also includes a layer of software that can sandbox the packages it installed. The idea is, a single package is less distro-specific and also is limited in the damage it can do. Ubuntu is the main user of Snap. Many other distributions, especially Fedora and the like seem to lean more towards Flatpak, which is a different take on what Snap does. Yes, it is complicated but the idea is to increase the security and portability of software packages for Linux. To manage Snap packages from the command line, you can use the snap command.
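For reference, the basic Snap operations from the command line look something like this (the package name is just an example; availability depends on the store):

```shell
# List installed snaps with their versions and channels
snap list

# See which snaps have pending updates, without applying them
snap refresh --list

# Update a specific snap (or all snaps, if no name is given)
sudo snap refresh discord

# Install or remove a snap
sudo snap install discord
sudo snap remove discord
```

By default snapd also auto-refreshes in the background on its own schedule, which is why a manually stuck version is surprising.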
That being said, I probably initially installed it via Snap--but then, shouldn't it have auto-updated? Yet the Snap store version is still stuck on v 0.28, and the Discord client is what insisted I download the .deb for 0.29. :shrug:
I do not think either of the above routes is too onerous or complicated, but I think the situation is made more complicated by the multiple options. If I am understanding things correctly, APT is a third option in addition to Snap and downloading a .deb directly--actually, there is a fourth, which is a variety of things we can lump together under "execute an installation script".
You should remove them by doing `sudo pacman -Rns discord`, followed by trying `yay -R discord`.
Then install it from either the AUR with `yay` or the Arch repo with `pacman`.
Edit: actually, I think I installed it via Snap Store. And the current Snap Store version is .28, but the new update is .29. So the .28 client is just in a loop of "there is a new update and I am out of date, so now I am just a modal that downloads the packages at these URLs". Not sure where my other installation came from, if I somehow did a fresh install or what.
I don't really get why there isn't just one obvious reliable and centralized way to install software on this thing. My Slack, VSCode, and other apps auto-update without my having to manually do anything, which, IMO, is how it should be by default.
I'm on Ubuntu--is there benefit to using Pacman on there? My understanding is that people do.
This part makes no sense. Linux works with any monitor; monitors connect via DP/HDMI and don't need device drivers. I have two 1440p monitors on my system. If you're having a problem here, it's undoubtedly the graphics card, and Linux has long had issues with drivers there (mainly Nvidia), but even here most Nvidia users say it works fine for them.
Starting with the obvious: dpi detection is mostly non-existent, and it seems the default text subpixel-hinting configurations aren't actually good for high-dpi monitors, so you either get shitty fonts and graphics with good screen real estate, or an FHD experience with acceptable quality. And then you have to choose: do you want color management or different dpi settings per monitor? Because you can't have both; Wayland doesn't seem to have proper color management yet, and X/Xorg doesn't have different scaling settings per monitor.
Did I mention Wayland supports different dpi settings per monitor? Well, sometimes it gets confused and doesn't work well. Getting my Kubuntu (I know, running KDE doesn't help) to work with both my FHD and QHD monitors at an acceptable dpi setting took several hours, and forced me to switch from Xorg to Wayland. Now instead of a robust desktop, I have a machine that needs to be rebooted every week because it starts forgetting to update screen regions: imagine a YouTube video playing, but you only see the first frame.
Ridiculous. Is it also Ford's fault I can't use an F-150 because I live on a mountain without roads? You may have purchased hardware whose maker has explicitly chosen to make life difficult for anything but Windows; that is hardly "because Linux". Nobody is even asking THEM to make drivers (although that would be nice); everyone would settle for "tell us how to use the hardware", or at minimum "don't actively prevent us from making drivers".
And why can you not use the monitor?
Your Discord problems are almost certainly caused by you doing it wrong. Linux is not a magic solver of problems; you know your way around Windows, but if you invested the same time into learning Linux as you did Windows, you'd know how to do things there as well.
> Your Discord problems are almost certainly caused by you doing it wrong. Linux is not a magic solver of problems; you know your way around Windows, but if you invested the same time into learning Linux as you did Windows, you'd know how to do things there as well.
No bro, it requires zero investment of time to "know your way around windows" for basic stuff like installing consumer programs--that's the thing. Not sure how you're missing that. I don't think I've ever installed anything incorrectly on Windows. I'm not sure that's even possible.
The issue is not that I expect Linux to be a "magic solver of problems"--it's that there are a lot of problems it has which are ridiculous in the first place and don't exist on other operating systems.
I was immediately entranced by the simplicity and elegance of Unix. I got a sense of gigantic systems humming beneath (or above) the terminal room, because this was an OS capable of scaling massively. Yet it allowed the end-user to piece together simple building blocks, in shell scripts, pipelines, and C or Pascal programs.
So I grew and adapted to Unix, and I lived through the heady proprietary days with Sun Microsystems, HP and DEC. Then I saw that bubble burst as Linux and the Wintel architecture supplanted them and dominated the market. I used all sorts of GUIs, from OpenWindows, CDE, Cygwin, etc.
I ran Unix or Linux at home between 1991-2022. I loved to tinker! If a gearhead always has his hotrod up on blocks in the driveway, I always had the cover off my computer and I was poking it in some fashion. I loved to play with software and configuration, and play sysadmin to my home lab. Until a few years ago, this was OK.
However, my days of tinkering came to an end. My days of wanting to self-host services ended. I became even more of an end-user than a Power User, and as of 3 years ago, I needed systems that work and are supported. I would tinker no longer.
So I said farewell to Linux. Now don't get me wrong. I fully support Linux in all its forms, from Android to the data center to hyperscale supercomputers. I just don't find a space for Linux or Unix at home anymore. Thanks for all the lovely years.
Linux and macOS are still not complex at all; what has changed the most over the years is firmware/hardware, drivers, and security-related items like the addition of sensors, ICs for encryption, etc., which are indeed very opaque by nature.
Also, it was never “OSX”, like the article uses, but so many people call it that.
"MacOS" (in addition to the bad capitalisation) could refer to an older and completely different OS.
Maybe not in this context, but in general clearer is better.
But look which ones are the actual most popular distro as voted by People Who Choose Linux on DW https://distrowatch.com/dwres-mobile.php?resource=ranking
IMHO Void Linux is a BSD with a Linux kernel, and the perfect option for the author.
> Hardware and Software Change Dynamically
> Modern systems (especially general purpose OS) are highly dynamic in their configuration and use: they are mobile, different applications are started and stopped, different hardware added and removed again. An init system that is responsible for maintaining services needs to listen to hardware and software changes. It needs to dynamically start (and sometimes stop) services as they are needed to run a program or enable some hardware.
I’m astounded at the customization that’s still possible. GNOME Tweaks still works on Ubuntu 23.04. I moved the activities bar to the bottom and merged it with the dock (some extension). I changed the fonts to Helvetica everywhere in the system UI — just GNOME Tweaks.
Only annoyance was that I had to create a symlink to make the Firefox snap see the fonts I had installed.
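For anyone hitting the same font issue: the workaround amounts to linking your user font directory into the part of your home the confined snap is allowed to read (the exact paths below are an assumption and may differ per snap and release):

```shell
# The Firefox snap can generally only read under ~/snap/firefox/common,
# so expose the user's font directory inside it. Paths are illustrative;
# check where your snap revision actually looks for fontconfig data.
mkdir -p ~/snap/firefox/common/.local/share
ln -s ~/.local/share/fonts ~/snap/firefox/common/.local/share/fonts
```

After a restart of Firefox, fontconfig inside the sandbox should pick the fonts up.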
It’s pretty cool that hidpi even works. I now think things like dbus are intrinsic to the problem of IPC for GUIs.
——
I can see why snap exists. It’s worth it to make it hard for a random ‘curl | sudo bash’ to break important programs.
But yes it’s ridiculous that I can’t update Firefox snap while it’s still open. And that it won’t auto update if I just close Firefox after getting the pop up about needing an update and wait a few seconds. I’m optimistic that will happen.
Even in 22.04 I could entirely disable the dock using the same tools to disable third-party extensions.
——
I’m a huge systemd hater. But I recently noticed it has something to run an arbitrary one-off program as a unit, with all of the isolation/logging facilities that provides. It’s pretty great from a tech perspective.
So I’ve softened my criticism somewhat. :)
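The feature being described sounds like `systemd-run`, which wraps a one-off command in a transient unit so it gets cgroup isolation, resource limits, and journal logging. The unit name, command, and limits here are just examples:

```shell
# Run a one-off command as a transient service unit with a memory cap;
# its output goes to the journal under the unit's name
systemd-run --unit=my-batch-job -p MemoryMax=512M \
    tar czf /tmp/backup.tar.gz /home/me/docs

# Inspect it like any other unit
journalctl -u my-batch-job
systemctl status my-batch-job

# Or run interactively in the current session as a scope, with a CPU limit
systemd-run --scope -p CPUQuota=50% some-heavy-command
```

`--scope` keeps the process attached to your shell, while the default service mode detaches it and lets systemd supervise it.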
——
It’s amazing how compatible the various distributions still are.
I’ve concluded these problems are not easy. The complexity in modern Linux is probably fine within a constant factor (analogy with time complexity — I’ll edit to clarify).
If we didn’t have multiple daemons etc., we would have a single giant “unified” system like systemd for the GUI also — like Windows.
—-
Finally, I think if we really want a simple gui stack we need to:
1. Get rid of graphical login and go back to VT login + startx.
2. User switching by alt-ctrl-f2 + startx.
3. Screen lock performed by actually logging out using the gui equivalent of tmux. (Jwz’s post linked here was educational.)
——
Thanks for reading so far. And THANK YOU Ubuntu and Red Hat for keeping the flame still burning.
PS: one final pet peeve. Why the heck does Ubuntu think it’s ok to half-ass booting other OS’s? Was super impressed that Oracle Linux (RHEL clone) actually listed all my OS’s and booted all of them after install. My hat is off to you.
Agreed, it really could use some kind of staging installation that it can quickly switch to.
> And that it won’t auto update if I just close Firefox after getting the pop up about needing an update and wait a few seconds. I’m optimistic that will happen.
This does seem to happen now. I'll get a notification that something has an update ready, I'll close it, and a few moments later a notification that the update has been installed and I can click that notification to relaunch it.
It is still very far from Windows, though; that is really a bad tag line.
On Windows, the firmware loads the bootloader, the bootloader loads ntoskrnl (the NT kernel), ntoskrnl starts the first user process smss.exe, smss starts wininit.exe and winlogon.exe, wininit starts the security subsystem lsass.exe, and winlogon logs the user on and starts the shell, explorer.exe.
On Linux, the bootloader loads vmlinuz (the kernel), vmlinuz starts the first user process (init), init starts the display manager (e.g. gdm), and gdm logs the user on, starts another copy of the X server, and launches the user's favorite window manager.
These have essentially stayed the same since the first NT 3.1 release and Red Hat 0.9. I'm not familiar with Mac OS X or other Unices, but I imagine they aren't that different. Modern desktop OS design has essentially converged to a point where there is no fundamental difference between any modern desktop OSes, because after all they are all trying to solve more or less the exact same set of problems.

When a user inserts a flash drive he expects it to be mounted automatically (you can't honestly expect most end users to type in a sudo command to access their flash drive, can you), and this in fact requires careful coordination from a ton of system components to implement securely, because mounting a filesystem is a privileged operation (you certainly don't want the end user to be able to unmount the root filesystem, but he should be able to unmount his flash drive). This is why things like polkit are necessary.

People in Linux land love to complain about dbus --- oh, it's turning Linux into the big evil M$ Windowz --- but how do you propose to solve the problem of two processes having to send objects to each other? With Plan 9's 9P? How about authentication? How do I make sure the process I'm talking to is really the process I think it is? If you try to solve all these problems, you end up with something that's essentially no different from dbus.
> Granted there are other Linux distributions like Gentoo, Alpine, Void, NixOS, etc. that are still conservative in some of these regards... However, these are not the popular ones, thus not the ones where the most development effort goes into.
NixOS has _by far_ the most software packaged and the most packaging activity. moreover, postmarketOS is likely the most widely-deployed distro for FLOSS mobile phones, and it's a downstream of Alpine. so i just disagree with this statement. but back to the proliferation of userspace services/interfaces, because IMO that's what's enabled these frontiers:
any project can ship a systemd service and/or a dbus file declaring which interfaces it implements. now your OS doesn't have to do anything special to connect the different components. your chat client announces "i'd like to speak to an <org.freedesktop.Notifications> implementation please", dbus says "oh, i've got a file here saying that dunst can do that for you, let me launch its service", and 100ms later a chat bubble notification hits your display.
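Concretely, the "file saying that dunst can do that" is a D-Bus service activation file. Dunst ships something along these lines (the exact path and unit name here are from memory, so treat them as illustrative):

```ini
# /usr/share/dbus-1/services/org.knopwob.dunst.service
[D-BUS Service]
Name=org.freedesktop.Notifications
Exec=/usr/bin/dunst
SystemdService=dunst.service
```

When the chat client asks the bus for org.freedesktop.Notifications, dbus-daemon matches the Name= line and starts the daemon on demand (delegating to systemd if SystemdService= is set).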
now you hear about a new notification handler, "SwayNotificationCenter". uninstall dunst, install SwayNC, and now the notifications have fancy inline replies, a panel that lets you mute specific applications, which you can open from the tray icon, and so on. actually, it's new enough software that your OS doesn't package it. so you write a PKGBUILD, or APKBUILD, or nix file for it. because the systemd/dbus descriptions are shipped by the package, your package script is all of 10 LoC. easy enough to open a PR against your distro for that, it takes all of 5 minutes for the maintainers to review something this small and standardized.
now that it's in a distro, it gets more traction. your distro (or a relative of it) decides they'd like to make SwayNC the default notification daemon for their users. but they want it to be consistent with the rest of the desktop, so they'll have to theme it a bit. peeking at the dependencies, they see it uses gtk, so they launch it with `GTK_DEBUG=interactive swaync` to view the document tree and modify the component styles until it fits in. SwayNC didn't have to do anything special to do that, maybe their devs didn't even know that feature existed -- they got it for free just by using a standard toolkit.
now i hope the author might understand why dbus/systemd/wayland are appealing to distros and packagers. there's a long chain between upstream/developer and downstream/user. i figure the author spends more time near the latter than the former. perhaps someday they'll experience the full cycle: maybe they'll find no music player meets their needs and decide to author their own, watch as it gets adopted into different downstreams without any action on their part, enjoy unexpected bug fixes and improvements that their users send back upstream, and walk away with a broader understanding of the different tradeoffs at play here.