I like Debian's measured pragmatism with ideology, how it's a distro of free software by default but it also makes it easy to install non-free software or firmware blobs. I like Debian's package guidelines, I like dpkg, I like the Debian documentation even if Arch remains the best on that front. I like the stable/testing package streams, which make it easy to choose old but rock-stable vs just a bit old and almost as stable.
And one of the best parts is, I've never had a Debian system break without it being my fault in some way. Every case I've had of Debian being outright unbootable or having other serious problems, it's been due to me trying to add things from third-party repositories, or messing up the configuration or something else, but not a fault of the Debian system itself.
Debian is great but I can't say this is a shared experience. In particular, I've been bitten by Debian's heavy patching of the kernel in Debian stable (specifically, backport regressions in the fast-moving DRM subsystem leading to hard-to-debug crashes), despite Debian releases technically keeping the "same" kernel for the duration of a release. In contrast, Ubuntu just uses newer kernels and -hwe avoids a lot of patch friction. So I still use Debian VMs but Ubuntu on bare metal. I haven't tried the kernel from the debian-backports repos though.
Needs citation.
Debian stable uses upstream LTS kernels and I'm not aware of any heavy patching they do on top of that.
Upstream -stable trees are very relaxed about the patches they accept, and unfortunately they don't get serious testing before being released either (you can see there's a new release in every -stable tree roughly every week), so that's probably what you've been bitten by.
I’ve thought about (ab)using a Proxmox repository on an otherwise stock Debian system before just for the kernel…
printf '#!/bin/sh\nexit 101\n' > /usr/sbin/policy-rc.d
chmod +x /usr/sbin/policy-rc.d
I think this is the recommended way to avoid autostarting services on Debian.

You're not trying hard enough ;-)
I have Debian on an old MacBook Pro and had it on an even older iMac, and I've had a few problems over the years. Always with proprietary drivers - WiFi, graphics, webcams, etc. - Apple really don't want people using free software on their hardware. There's always been a fix, but there have been a few stressful moments and hoops to jump through.
But it's definitely my favorite distro, and I run it everywhere I can. Pretty much always "just works" anywhere but Apple.
In what way did Ubuntu go downhill?
I had a few issues caused by Ubuntu that weren't upstream. One was Tracker somehow eating up lots of CPU power and slowing the system down. Another was with input methods, I need to type in a pretty rare language and that was just broken on Ubuntu one day. Not upstream.
The bigger problem was Ubuntu adding stuff before it was ready. The Unity desktop, which is now fine, was initially missing lots of basic features and wasn't a good experience. Then there was the short-lived but pretty disastrous attempt to replace Xorg with Mir.
My non-tech parents are still on Ubuntu, have been for some twenty years, and it's mostly fine there. I wouldn't recommend it if you know your way around a Linux system but for non-tech, Ubuntu works well. Still, just a few months ago I was astonished by another Ubuntu change. My mom's most important program is Thunderbird, with her long-running email archive. The Thunderbird profile has effortlessly moved across several PCs as it's just a copy of the folder. Suddenly, Ubuntu migrated to the snap version of Thunderbird, so after a software update she found herself with a new version and an empty profile. Because of course the new profile is somewhere under ~/snap and the update didn't in any way try to link to the old profile.
Then there were stupid things like Amazon search results in the Unity dash search when looking for your files or programs. Nah. Ubuntu isn't terrible by any means but for a number of years now, I'd recommend Linux Mint as the friendly Debian derivative.
I think it’s worth keeping that in mind with all the hate Ubuntu gets. Most users are just silently getting their work done on an LTS they update every two years.
This made it more fragile. It was really nice in the late 2000s, but gradually became worse.
snap, lxd (not lxc!), mir, upstart, ufw.
It's neverending, and it's always failing.
In the early days it had a differing and usually better aligned release schedule for the critical graphics stack.
As a function of time, you are increasingly likely to get rug pulled once Shuttleworth decides to collect his next ransom.
The two things I can remember were problems with NFS out of the box (outside having to install nfs-common, which I'm fine with) and apt-cache not displaying descriptions of packages. There were lots of other, minor annoyances that affected people like me but wouldn't affect someone who got into Linux desktops after, say, 2010. My memory sucks though so those are the two I remember. Yes, there were bug reports filed and yes, they sat in the tracker for years with no attention from Ubuntu.
I wound up back on Debian once I got old enough that I didn't care about being behind the times a couple years.
That's curious, because when I was learning to make Debian packages, I found the official documentation to be far better than I had seen from any other distro. The Policy Manual in particular is very detailed, continually improving, and even documents incremental changes from each version to the next. (That last bit makes it easy for package maintainers to keep up with current best practices.)
Does Arch have something better in this department?
Are you perhaps comparing the Arch wiki to Debian's wiki? On that front I would agree with you.
I still use Debian but it's hard to forget stuff like that even after all these years.
There is plenty that could be said of Debian but as far as I’m concerned that’s not part of it.
Debian patches software for purely ideological reasons, because they think it's not free enough. That's not pragmatism; it's the reverse of pragmatism. And it's certainly a real drag on the teams developing the software they try to ship.
My experience has been contrary to that. I'm a Linux user of 25+ years with various distros, about half of that time with Debian as my main desktop. I broke up with Debian about ten years ago thinking we could still be friends, but every time I've tried to put it on a new box since then, something weird has happened, most recently about a month ago on a completely new Intel N150, when it gave me some stick about video modes. Today my laptop got hosed by an attempted upgrade from bookworm to trixie, as in tons of error messages and then no more docker and no more virtualbox. No harm done, because Debian taught me long ago to store a copy of the whole root filesystem on external media before an upgrade, but now the clock is ticking until I have to migrate off it or get stuck with something too old to be compatible with anything.
https://blog.kronis.dev/blog/debian-updates-are-broken
https://blog.kronis.dev/blog/debian-and-grub-are-broken
Then again, I’ve had most software occasionally break, I’m thankful that Debian exists.
I learned nftables with Bookworm and labwc with Trixie.
labwc supports Wayland with Openbox configuration.
So here is what I _don't_ like about Debian :-)
- I don't like Debian packaging tooling (dpkg, debootstrap, debuild...). Actually I hate everything about the experience of Debian packaging. Every time I package for Debian, I end up with a messed-up setup of chroots and have to make triple sure nothing leaked from my environment.
- Debian has a habit of repackaging everything with its own special sauce, disregarding upstream philosophy. Debian packages will have their own microcosm of configuration directories, defaults, paths, etc., orthogonal to what a pristine installation looks like.
- Debian has the annoying habit of starting services by default upon installation. So you always have to dance around your configuration management to disable services, install them, configure them, then restart them.
On a personal note, Trixie is very exciting for me because my side project, ntfy [1], was packaged [2] and is now included in Trixie. I only learned that it was included very late in the cycle, when the package maintainer asked for license clarifications. As a result, the Debian-ized version of ntfy doesn't contain the web app (which is a reaaal bummer) and has a few things "patched out" (which is fine). I approached the maintainer and just recently added build tags [3] to make it easier to remove Stripe, Firebase and WebPush, so that the next Debian-ized version won't have to contain (so many) awkward patches.
As an "upstream maintainer", I must say it isn't obvious at all why the web app wasn't included. It was clearly removed on purpose [4], but I don't really know what to do to get it into the next Debian release. Doing an "apt install ntfy" is going to be quite disappointing for most if the web app doesn't work. Any help or guidance is very welcome!
[1] https://github.com/binwiederhier/ntfy
[2] https://tracker.debian.org/pkg/ntfy
[3] https://github.com/binwiederhier/ntfy/pull/1420
[4] https://salsa.debian.org/ahmadkhalifa/ntfy/-/blob/debian/lat...
> The webapp is a nodejs app that requires packages that are not currently in debian.
Since vendoring dependencies inside packages is frowned upon in Debian, the maintainer would have needed to add those packages themselves and maintain them. My guess is that they didn't want to take on that effort.
Woah. Shouldn’t Node and Golang be in Debian’s official repos by now?
Debian sources need to be sufficient to build. So for npm projects, you usually have a Debian-specific package.json where each npm dependency (transitively, including devDependencies needed for the build) needs to either be replaced with its equivalent Debian package (which may also need to be ported), vendored (usually less ideal, especially for third-party code), or removed. Oh, and enjoy aligning versions for all of that. That's doable but non-trivial work with such a sizable lockfile. If I had to guess, the maintainer couldn't justify the extra effort of combing through all those packages.
I also think in either case the Debian way would probably be to split it out as a complementary ntfy-web package.
My advice to you is to deny all support to people using the Debian version of your software and automatically close all bug tickets from Debian, saying you don't support externally patched software.
You would be far from the first to do so, and it's a completely rational and sane decision. You don't have to engage with the insanity that Debian's own policies force on its maintainers and users.
> The ntfy image is available for amd64, armv6, armv7 and arm64. It should be pretty straight forward to use.
Debian has been the stable footing of my Free computing life for three decades. Everything about their approach — from showing me Condorcet, organising stable chaos, moving forward by measured consensus, and basing everything on hard wrought principles — has had an effect on me in some way, from technical to social and back again.
I love this project and the immeasurable impact it has had on the world through their releases and culture.
With all my love, g’o xx
Impressive that i386 support made it all the way to August 2025. I have Debian 10 Buster running on a Pentium 3 which only EOL'd last year in June 2024. It's still useful on that hardware and I'm grateful support continued as long as it did!
OpenBSD still supports i386 for those looking for a modern OS on old 32-bit hardware.
I am not happy about unnecessary ewaste, but an i386 almost certainly has an order of magnitude less horsepower than a Raspberry Pi or N100.
$ curl -s http://deb.debian.org/debian/dists/trixie/main/binary-amd64/Packages.gz | zgrep ^Package: | wc -l
68737
$ curl -s http://deb.debian.org/debian/dists/trixie/main/binary-i386/Packages.gz | zgrep ^Package: | wc -l
66958

If that's all there is to it, you can still use debootstrap, compile a kernel, and point the root parameter to your shiny new install.
If the official i386 arch was built with instructions that your hardware doesn't support, tough cookies.
True i386 support would mean compatibility with the original Intel 386 processor from 1985. The 486 added a few additional instructions in 1989, but things really changed with the Pentium in 1993 - that gave us i586, which is the bare minimum for most modern software. Much software can still run on regular Pentiums if compiled for it, but SSE2 optimizations require at least a Pentium 4 or Core CPU instead.
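On Linux you can check which of these instruction sets a CPU actually advertises by grepping the kernel's flags list; a small sketch:

```shell
# Report whether the CPU advertises SSE2 (Pentium 4 / Core and later).
# The flag names come from the kernel's /proc/cpuinfo.
if grep -qw sse2 /proc/cpuinfo; then
    echo "SSE2: yes"
else
    echo "SSE2: no"
fi
```

The same pattern works for other feature flags (`mmx`, `sse`, `cx8`, ...) when deciding whether a given binary target will run on old hardware.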
I play with retro PCs often and found OpenBSD's i386 target stopped supporting real 386 CPUs after the 4.1 release, and dropped support for i486 somewhat recently in 6.8. It now requires at least a Pentium class CPU, i586, though the arch is still referred to as i386 likely because it's a common proxy for "32-bit".
It was a bit of a strange decision since there were undoubtedly more 386, 486, and Pentium users than some of the platforms Linux continued to support, but that's the choice they made. But they weren't alone. Even NetBSD requires a 486DX or better.
However, that doesn't stop one from installing a Pentium Overdrive in an old Socket 3 board and running the latest release. ;)
From my build box:
chroot $MOUNTPOINT/ /bin/bash -c "http_proxy=$aptproxy apt-get -y --purge --allow-remove-essential install sysvinit-core sysvinit-utils systemd-sysv- systemd-"
There is a weird dependency you cannot get around without simultaneously removing and installing in parallel. A Debian bug highlighted the above, with a trailing "-" on systemd-sysv- and systemd- as the fix, along with --allow-remove-essential.

After this fix, sysvinit builds with debootstrap were almost identical to bookworm's. This includes for desktops.
As per with bookworm through buster, you'll still need something like this too:
$ cat /etc/apt/preferences.d/systemd
# this is the only systemd package that is required, so we up its priority first...
Package: libsystemd0
Pin: release trixie
Pin-Priority: 700
# exclude the rest
Package: systemd
Pin: release *
Pin-Priority: -1
Package: *systemd*
Pin: release *
Pin-Priority: -1
Package: systemd:i386
Pin: release *
Pin-Priority: -1
Package: systemd:amd64
Pin: release *
Pin-Priority: -1

* https://jdebp.uk/Softwares/nosh/guide/services/systemd-login...
I run a full desktop too, without it. Multiple variants.
I don't use gnome's Desktop Environment though (although I do run gtk/gnome software), so cannot comment on that.
See below:
APT is moving to a different format for configuring where it downloads packages from. The files /etc/apt/sources.list and *.list files in /etc/apt/sources.list.d/ are replaced by files still in that directory but with names ending in .sources, using the new, more readable (deb822 style) format. For details see sources.list(5). Examples of APT configurations in these notes will be given in the new deb822 format.
If your system is using multiple sources files then you will need to ensure they stay consistent.
- https://wiki.debian.org/SourcesList#APT_sources_format
- https://www.debian.org/releases/trixie/release-notes/upgradi...
The "apt modernize-sources" command can be used to simulate the change and then replace ".list" files with the new ".sources" format.
Modernizing will replace .list files with the new .sources format, add Signed-By values where they can be determined automatically, and save the old files into .list.bak files.
This command supports the 'signed-by' and 'trusted' options. If you have specified other options inside [] brackets, please transfer them manually to the output files; see sources.list(5) for a mapping.

Oh nifty, I hand-converted all mine a couple years back. It would have been nice to have that then (or know about it?). I do really like the new deb822 format, having the gpg key inline is nice. I do hope that once this is out there the folks with custom public apt repos will start giving out .sources files directly. Should be more straightforward than all the apt-key junk one used to have to do (especially when a key rotated).
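For anyone who hasn't seen it yet, a deb822-style entry looks roughly like this (the suites and keyring path are illustrative, not from any particular system):

```
Types: deb
URIs: http://deb.debian.org/debian
Suites: trixie trixie-updates
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
```

Each stanza replaces one or more old one-line `deb ...` entries, and the Signed-By field takes the place of the old apt-key dance.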
[0]: https://docs.ansible.com/ansible/latest/collections/ansible/...
[0]: https://manpages.debian.org/buster/apt/sources.list.5.en.htm...
Congratulations to the team--phenomenal work!
Alternative to parsing the reproduce web site :)
On multiple runs, malloc gives out different addresses (thanks to threads or security measures like ASLR), which means objects end up in different slots in the table. Then you iterate through it in memory order and you're seeing objects in non-deterministic order, and that order leaks into the output.
Embedding file paths / timestamps / git shas and similar was popular for a while too and unhelpful for reproducible builds.
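A toy illustration of the timestamp problem (file names are made up; SOURCE_DATE_EPOCH is the real convention build tools honor to pin embedded dates):

```shell
#!/bin/sh
# Toy "build" that embeds a wall-clock timestamp into its output,
# a classic source of unreproducible builds.
build() {
    printf 'version=1.0\nbuilt=%s\n' "$(date +%s%N)" > "$1"
}

build /tmp/out.a
build /tmp/out.b

if cmp -s /tmp/out.a /tmp/out.b; then
    echo "reproducible"
else
    echo "NOT reproducible"
fi
```

On a GNU system the two outputs differ by their nanosecond timestamps, so this prints "NOT reproducible"; substituting a pinned date (as SOURCE_DATE_EPOCH prescribes) would make the artifacts byte-identical.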
> 5.2.2. systemd message: System is tainted: unmerged-bin systemd upstream, since version 256, considers systems having separate /usr/bin and /usr/sbin directories noteworthy. At startup systemd emits a message to record this fact: System is tainted: unmerged-bin. It is recommended to ignore this message. Merging these directories manually is unsupported and will break future upgrades. Further details can be found in bug #1085370.
No option to disable this either, per discussion in https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1085370
http://0pointer.net/blog/projects/stateless.html
https://www.freedesktop.org/wiki/Software/systemd/TheCaseFor...
That said, my knee-jerk is also that this is about strong-arming distros. Which leaves a bad taste in my mouth. I’d be interested to hear other viewpoints though.
The discussion in that bug is that the Debian maintainer (and upstream dev) is open to an upstream patch to add such an option.
Wild interpretation right here.
There are only 2 realistic choices: Leave it as is or patch out the warning message in the Debian package
Debian maintainer is clearly deflecting the responsibility here because everyone knows very well that upstream wouldn't accept such a patch.
As it's already explained in the bug report, since Debian has no plan to do that migration in the near future, the aforementioned warning isn't just useless and annoying, it's also potentially harmful. Thus the correct action would be to remove it downstream, like they did with xscreensaver (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=819703#84)
But then they'd face the wrath of Lennart, so the only choice left is ignoring the report.
Because they said so:
> As part of that we sometimes adopt schemes that were previously used by only one of the distributions and push it to a level where it's the default of systemd, trying to gently push everybody towards the same set of basic configuration [1]
Also, like it or not, the word "tainted" bears a negative connotation. It's hard to imagine that it was chosen arbitrarily.
Once I get the hypervisor systemd-free (no systemd on FreeBSD), I can then install a minimal distro in a VM meant for containerization (like, say, the Talos Linux distro for K8s, which only has a few executables, all immutable) and then I can run containers that, by design, have something that is precisely not systemd as PID 1.
So life is good: there's a systemd-free world at the end of the tunnel.
Did you consider Devuan? Or is this just taking one annoyance as motivation to fix others at the same time?
Minimal: https://cdimage.debian.org/debian-cd/current/amd64/bt-cd/deb...
Full: https://cdimage.debian.org/debian-cd/current/amd64/bt-dvd/de...
https://www.debian.org/releases/trixie/release-notes/issues....
>"You can return to /tmp being a regular directory by running systemctl mask tmp.mount as root and rebooting."
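A quick way to see which behavior a given system ended up with (assuming GNU coreutils' df):

```shell
# Print the filesystem type backing /tmp; "tmpfs" is the new RAM-backed
# default, anything else means /tmp is on a regular on-disk filesystem
# (e.g. after masking tmp.mount and rebooting).
df --output=fstype /tmp | tail -n 1
```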
I kind of wish the distros had decided on a new /tmpfs (or /tmp/tmpfs, etc) directory for applications to opt-in to using ram-disk rather than replacing /tmp and having to opt-out.
The problem with /tmp was that many people and apps used it as an inter-user communication medium and expected persistence there, so it created both security problems and wasted disk space over time.
Since few packaged apps used /tmp like that, and most used the folder the way it was meant to be used, the change was made.
I'm running Debian testing on one of my systems, and the change created no ill effects whatsoever. Not eating SSD write cycles can be considered a plus, even.
However, as I also noted in the relevant thread, the approach might have a couple of downsides in some scenarios.
If you have the time and the desire, discussion starts at https://lists.debian.org/debian-devel/2024/05/msg00014.html
This too can be turned off.
"trixie"
64-bit PC (amd64),
64-bit ARM (arm64),
ARM EABI (armel),
ARMv7 (EABI hard-float ABI, armhf),
64-bit little-endian PowerPC (ppc64el),
64-bit little-endian RISC-V (riscv64),
IBM System z (s390x)
It's good to see RISC-V becoming a first-class citizen, despite the general lack of hardware using it at the moment.

I do wonder, where are PowerPC and IBM System z being used these days? Are there modern Linux systems being deployed with something other than amd64, arm64, and (soon?) riscv64?
From a developer perspective, s390x is also the last active big-endian architecture (I guess there's SPARC as well, but that's on life support and Oracle doesn't care about anyone running anything but Solaris on it), so it's useful for picking up endianness bugs.
Another interesting thing is that the only two 32-bit architectures left supported are armel and armhf. Debian has already announced that this will be the last release that supports armel (https://www.debian.org/releases/trixie/release-notes/issues....), so I guess it'll be a matter of time before they drop 32-bit support altogether.
Source: https://itic-corp.com/itic-2023-reliability-survey-ibm-z-res...
The legitimate usage is typically for workloads like relational databases (which are anyway single-machine architectures) that experience heavy load 24/7, part of fragile architectures that cannot tolerate downtime (remaining fragile as such because the original source code has been lost etc.), where "the system is down" causes tens of thousands if not hundreds of thousands of people to stop work.
them being kept by major distros is therefore not as "natural" as other architectures
IBM.
And they own redhat, so I imagine they put a lot of time and money into making the kernel work.
Why Debian in particular, not sure.
The end of an era.
The thing I like most about Debian is that you need to know at least a little about what is going on to use it. For me, it does a good job of following "as simple as possible and no simpler."
My first Debian install was in 1996. I had no real idea what I was doing, but it was amazing to me that I could remote-display windows from machines across campus, and it was alien compared to the windows 3.x/95 I was used to at that point. There was no apt at that point, or none that I was aware of, and adding new stuff was painful.
I started using debian preferentially as my workstation/desktop OS in about 2005, and was installing it on embedded systems (linksys nslu2) to make micro servers by … etch I think it was.
By 2008 I was at IBM and they allowed a choice of windows or redhat on your laptop, and if you were adventurous there was experimental support for Ubuntu which might work on Debian. I made it work and discovered that among 330k people there were 22 of us running it!
Always loved it, it always just made more sense than other distros somehow. My daily driver is a Mac now, but I still have a few Debian machines around.
Fortunately, bookworm will continue to receive updates for almost 3 years, so I am not in a hurry to look for a new OS for these relics. OpenBSD looks like the natural successor, but I am not sure if the wifi chips are supported. (And who knows how long these netbooks will continue to work, they were built in 2008 and 2009, so they've had a long life already.)
EDIT: Hooray, thanks to everyone who made this possible, is what I meant to say.
My ~/.config/mpv/config:
# start of file
ytdl-format=bestvideo[height<=?480][fps<=?30]+bestaudio/best
vo=gl
audio-pitch-correction=no
quiet=yes
pause=no
vd-lavc-skiploopfilter=all
demuxer-cache-wait=yes
demuxer-max-bytes=4MiB
# end of file
My ~/yt-dlp.conf:

# start of file
--format=bestvideo[height<=?480][fps<=?30]+bestaudio/best
# end of file
For the rest, I use streamlink from a virtualenv (I do the same with yt-dlp) with wrappers at $HOME/bin.

yt-dlp wrapper:
#!/bin/sh
. $HOME/src/yt-dlp/bin/activate
$HOME/src/yt-dlp/bin/yt-dlp "$@"
streamlink wrapper:

#!/bin/sh
. $HOME/src/streamlink/bin/activate
$HOME/src/streamlink/bin/streamlink "$@"
To install streamlink:

mkdir -p ~/src/streamlink
cd ~/src/streamlink
virtualenv .
. bin/activate
pip3 install -U streamlink
The same with yt-dlp:

mkdir -p ~/src/yt-dlp
cd ~/src/yt-dlp
virtualenv .
. bin/activate
pip3 install -U yt-dlp
On the rest, I use mutt+msmtp+mbsync, slrn, sfeed, lynx/links, mocp, mupdf for PDF/CBZ/EPUB,
nsxiv for images, tut for Mastodon and Emacs just for Telegram (I installed tdlib from OpenBSD
packages and then I installed Telega from MELPA).

Overall it's a really fast machine. CWM+XTerm+Tmux is my main environment. I have an SSH connection open to somewhere else on the 3rd tag (virtual desktop), and the 2nd one is for Dillo.
So nothing critical. But something they are still good at, and being very small makes them a natural fit for these use cases.
https://www.debian.org/releases/trixie/release-notes/issues....
# example:
udevadm test-builtin net_setup_link /sys/class/net/eno4 2>/dev/null
ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link
ID_NET_LINK_FILE_DROPINS=
ID_NET_NAME=eno4 <-- note the NIC name that will happen after reboot
Here's a one-liner, excluding a bond interface and lo, that gives a nice list of pre- and post-change names:

for x in $(grep auto /etc/network/interfaces | cut -d ' ' -f 2 | grep -Ev 'lo|bond0'); do echo -n $x:; udevadm test-builtin net_setup_link /sys/class/net/$x 2>/dev/null | grep NET_NAME | cut -d = -f 2; done
The doc's logic is that after you've upgraded to trixie, and before reboot, you're running enough of systemd to see what it will name interfaces after reboot.

So far I have not had an interface name change due to an upgrade, so I cannot say that the above does detect it.
enoX should always stay stable, as it's the BIOS (in some ACPI table) telling that this device/port has this ID.
ensX means the NIC in PCIe slot X, but in your PCIe tree you can have PCIe bridges, so technically you could have multiple NICs in the same slot (what the BIOS declares as a slot). So there were a lot of breaking NIC naming changes over the years as systemd figured out the right heuristics that are safe, enabling/disabling slot naming if there is a PCIe bridge, but only in some cases.
Also for historical reasons the PCIe slot number was read indirectly leading to some conflicts in some cases (this was fixed in systemd 257)
Every year's cope with systemd.
My first system migrated in less than 10 minutes, incl. package downloads and reboot. It's not a beast either. N100 mini PC connected to a ~50mbps network.
If your sources files reference the release name (e.g. bookworm), you change them to trixie, then “apt update && apt dist-upgrade”.
or,
If your sources files directly reference distro suites (e.g. stable), you just “apt update && apt dist-upgrade”, since stable is now pointing to trixie.
After the first reboot, you run “apt autopurge” to remove packages which are no longer needed.
…and you’re done.
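The release-name edit in the first case can be sketched like this (the demo file path is made up; on a real system you'd point sed at /etc/apt/sources.list and the files under /etc/apt/sources.list.d/):

```shell
# Create a sample bookworm-era sources file at a scratch path.
printf 'deb http://deb.debian.org/debian bookworm main\ndeb http://security.debian.org/debian-security bookworm-security main\n' > /tmp/sources.demo

# Rewrite every release-name reference for the new release.
sed -i 's/bookworm/trixie/g' /tmp/sources.demo

cat /tmp/sources.demo
# deb http://deb.debian.org/debian trixie main
# deb http://security.debian.org/debian-security trixie-security main
```

After that it's the usual “apt update && apt dist-upgrade”, a reboot, and “apt autopurge” as described above.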
For the RedHat family, this is nigh impossible, requiring hours of planning, downtimes, etc., and even then it's not guaranteed to work.
If you prefer a rolling release, and won't use it on a server, Debian Testing got you covered (unless you are in a security sensitive environment). My systems are well-isolated from outside, so desktop systems can run Testing without issues.
Servers and exposed systems always run stable with security updates installed automatically, though.
https://www.debian.org/donations
Not affiliated, just a happy user for a long, long time.
pip3 install <whatever> --prefix=/usr/local
will install into /usr/local/local, so one has to use the prefix /usr. The same command on, say, openSUSE will install into /usr and break your system. Barking mad.

https://sources.debian.org/src/python3.7/3.7.3-2+deb10u3/deb...
Certainly a terrible UX, but the motivation is clear: they're trying to get PEP 668 protections for older versions.
Virtual environments work a lot better anyway, honestly. (With a properly crafted `pyvenv.cfg`, it should be possible to convince Python that your /usr/local is a virtual environment, but I can't be sure offhand if there are any serious negative consequences of that.)
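For the curious, the trick mentioned above would look something like this, entirely hypothetical and untested as noted; a pyvenv.cfg placed one level above the interpreter's bin directory is what makes Python treat a prefix as a venv, and the values here assume a system Python 3.11 in /usr/bin:

```
# /usr/local/pyvenv.cfg  (hypothetical)
home = /usr/bin
include-system-site-packages = true
version = 3.11
```

Whether tools other than the interpreter itself (distro packaging, pip's own sysconfig paths) behave sanely with this is exactly the part that would need testing.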
https://www.python.org/downloads/release/python-370/
Why is that the default?
Then my private laptop has had a bunch of graphic issues after upgrading to 13 (it manifests differently in a lot of applications and it changes when you pick a different desktop theme, not even sure how to describe it). The new pipewire (pulseaudio replacement, idk why that needed replacing) does not work properly when the CPU is busy (so I currently play games without game sounds or music in the background). The latter then also sometimes (1 in 5 times maybe?) crashes when resuming from suspend, but instead of dying, spams systemd which diligently stores it all in a shitty binary file (that you can't selectively prune), runs completely out of disk space, and breaks various things on the rest of the system until you restart the pipewire process and purge any and all logs (remember, no selective pruning)... Tried various things I found in web searches and threw an LLM at it as well, but no dice. I assume these issues are from it not being a fresh install, so no blame/complaint here really, just annoying and I haven't had these issues when doing previous upgrades. Not yet sure how to resolve, perhaps I'll end up doing a completely new install and seeing what configs I can port until issues start showing up
Surely these things are not a Debian-specific issue, but I haven't noticed something like that with either 11 or 12
Edit: oh yeah, and the /tmp(fs) counter is at 1 so far. I wonder how many times I'll have run out of RAM by Debian 14, by forgetting I can't just dump temporary files into /tmp anymore without estimating the size correctly beforehand
In Gnome, Blender becomes unresponsive but everything else is still usable. In Cinnamon, the entire system becomes unresponsive.
Wow, I'm amazed a third of packages haven't seen an update in, ehm checks
> After 2 years, 1 month, and 30 days of development, the Debian project is proud to present its new stable version
I'm a fan of old software myself, in the sense that I find it cool to see F-Droid having a (usually tiny) package that is over 10 years old but it does exactly what I want with no bugs and it works perfectly on Android 10. I wonder if those 30% more commonly fall in the "it's fine as it is" category or in the "no maintainers available" category
What I did is switch to NetBSD.
They all still have DVD reader drives and are nice for ripping CDs. Despite the fact that the drives are nearing 20 years of age (machines are from ~2005) they still perform better than most “new” external drives. Of course one could also move the drives to a newer machine but many of them use the IDE connector which is not commonly found on modern systems. Also, modern cases typically don't account for (multiple) 5.25" drives.
The other use case is flashing microcontrollers. When fiddling around with electronics there is always a risk of a short circuit or other error that could, in the worst case, kill the attached PC's mainboard. I feel much safer attaching my self-built electronics to an old machine than to my amd64 workstation.
Due to their age, I think the old machines may not live much longer -- I fear not even 10 more years, some of my old 32-bit laptops have already failed. Hence even for me it does not make sense to try keeping up the software support. Maybe I switch them to a BSD or other Linux distribution if they live long enough but for now the machines run OK with Debian Bookworm (newly oldstable), too.
I suppose as some kind of headless home server it could still have been useful. OTOH for something that runs 24/7 a RPi would use a fraction of the electricity and still be a lot more powerful.
So yes, beyond nostalgia and some embedded/industrial usecases, it's hard to see a use for a 32-bit only PC these days.
Hardware support is good and the UI is great! It feels snappier than Ubuntu, perhaps due to the lack of snap and fewer services and applications installed by default.
1.) sudo apt-get update && sudo apt-get --yes upgrade && sudo apt-get --yes autoremove --purge
2.) Update all entries of bookworm to trixie in /etc/apt/sources.list.
3.) sudo apt full-upgrade
4.) sudo reboot
5.) sudo apt modernize-sources

What does this mean? If all 69k+ packages are installed, it will take up this much space?
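A minimal sketch of step 2 above (the bookworm → trixie rewrite). It operates on a local copy by default — the real file is /etc/apt/sources.list — and the sample contents below are my assumption of a typical bookworm file:

```shell
# Rewrite every "bookworm" suite name to "trixie" in an apt sources file.
SOURCES="${SOURCES:-./sources.list}"
# Create a sample file if none exists, so the sketch is self-contained:
[ -f "$SOURCES" ] || printf 'deb http://deb.debian.org/debian bookworm main\ndeb http://security.debian.org/debian-security bookworm-security main\n' > "$SOURCES"
sed -i 's/bookworm/trixie/g' "$SOURCES"
```

Check the result with `apt-get update` (dry: `apt-get update --print-uris`) before running the full-upgrade.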
Lucky 13 and all... And not a single issue so far. Very happy!
Thanks to the Debian team for putting out yet another high quality, reliable release :)
Does anyone have any suggestions for a 32-bit distro that's still being updated?
In addition to Mint and AntiX there is also Slackware (I use xfce rather than KDE), but adding software outside the (large) base install is not an 'apt-get'-style process.
Salix Linux, based on Slackware, does have package repositories with a reasonable selection of applications. Salix is based on the stable Slackware 15.0 so has nicely aged packages.
Void Linux also has an xfce4-based live i686 ISO you can look at and decide whether to install from.
That being said, I like Flatpak, so I installed it (was super easy and Flathub provides instructions), and I added a few Gnome Shell extensions (a Dock so my wife can find apps when she occasionally uses my laptop).
Debian gives you a feeling of ownership of your computer in a way the corporate distros don't, but is still pretty user friendly (unlike Arch).
I'd definitely install Debian Stable on a grandparents' computer.
Even a bare Slackware with KDE and KDEi (and even XFCE) can do tons of work by itself after just adding a user and accepting the default group memberships by pressing 'up' at the prompt.
Heck, even OpenBSD, minus the volume automount, which can be handled in a breeze with toadd or a tray app in seconds; and if you are smart you can figure out the DBus/FDo mount points and integrate them with XFCE/Plasma/Gnome without too many issues (hotplugd can handle device unmounting if you set up doas.conf accordingly).
The rest? Mesa and X.org will handle most of the graphics stuff. Video and audio drivers are autodetected on almost every GNU and *BSD. Printers are often wireless-bound, so any setup assistant will look one up fast and attach it to CUPS.
Still, I can't handle DPKG/APT's slowness, even in libre distros such as Trisquel that use it. If they rebased their distro on a simpler Parabola LTS release with either Mate or LXDE setups, the user experience would be almost the same, but installing packages would happen at a much faster pace.
Night-and-day decision-making process compared to Fedora and Arch, which both replaced SDL2 with sdl2-compat, broke a bunch of SDL2 apps because sdl2-compat isn't actually SDL2-compatible yet, and sent everyone to yell at the SDL team about it.
I’m not familiar with the metric definition they use, but I’d be worried if close to 100% of the packages they included in bookworm hadn’t been updated in the roughly 2 years between releases.
I use Debian for most of my servers, so I’m sure there is a valid explanation of that phrase.
And even if it was?
If you look at the number of packages in Debian, only a small portion have CVEs. There are nearly 30k source packages, which build into some 60k binary packages.
Yet we only get a few security updates weekly.
Another example? Both trixie and bookworm use the same Firefox ESR (Extended Support Release) version. Both will get updated when Firefox forces everyone to the next ESR.
Beyond that, some packages are docs. Some are 'glue' packages, eg scripts to manage Debian. These may not change between releases.
Lastly, Debian actually maintains an enormous number of upstream orphaned packages. In those cases, the version number is the same (sometimes), but with security updates slapped on if required.
From my perspective, outside of timely and quick security updates, I have zero desire for a lot of churn. Why would I? Churn means work. Churn means changed stability.
We get plenty of fun and churn from kernel, and driver related changes (X, Wayland, audio/nic, etc), and desktop apps. And of course from anything running forward, with scissors, like network connected joy.
Code doesn't "go bad" and not everything is affected by ecosystem churn and CVEs.
An established package not having updates for 2y is not in and of itself problematic.
I've found it pretty easy, though, to use some KDE components built from source on top of the standard Debian packages. Build with kdesrc-build, then symlink those binaries from your ~/bin and you're set. It might get difficult if you want to rebuild a key component like plasmashell itself, but I've been using locally built versions of Kate and Konsole without issue.
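A hedged sketch of that ~/bin trick as a small helper function. The prefix path is an assumption (kdesrc-build's default install prefix is commonly ~/kde/usr; adjust if yours differs), and Debian's default ~/.profile already puts ~/bin first on PATH:

```shell
# Symlink locally built KDE apps into a bin dir that shadows /usr/bin.
link_kde_apps() {
  # $1 = build prefix (e.g. ~/kde/usr), $2 = destination bin dir (e.g. ~/bin)
  mkdir -p "$2"
  for app in kate konsole; do
    # Only link apps that were actually built and are executable:
    [ -x "$1/bin/$app" ] && ln -sf "$1/bin/$app" "$2/$app"
  done
  return 0
}

# Typical use:
# link_kde_apps "$HOME/kde/usr" "$HOME/bin"
```

Remove the symlink and the distro package takes over again, which makes this easy to back out of.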
Issues I'm experiencing right now *AFTER* migrating Debian 12 -> 13 on X11:

1. kwin_x11, kscreenlocker_greet, Xorg and ksmserver eat CPU like crazy after the screen is locked: a known issue in KDE 6/Qt 6, affecting Qt 6.8, 6.9 and 6.10, fixed in Qt 6.9.2 and 6.10.0. Guess which version of Qt is used in Debian 13.

2. The "Disks & Devices" applet reports the wrong disk state (mounted/unmounted) when a disk has Device Auto-Mount set to "On Attach".

3. The TrackPoint, touchpad and Bluetooth mouse are now treated as a single device, with no way to separate their settings. Yep, the touchpad settings tab says "No touchpad found". Right now I basically have to either accept a changed way of navigating in CAD and browsers with my mouse, or forget about using scroll with my touchpad, or toggle "Press middle button and move mouse to scroll" every time I switch input devices.
And all of these I found in just two days of using this release. I need to stress that before the update, on Debian 12 with KDE 5, everything worked smoothly, as expected, and with all settings accessible from the UI. Now, to fix all of this, I need to crawl through configs, add scripts, workarounds and so on. It looks like Debian updates now need to be ignored for a while, the same way it happens with Ubuntu (3+ months), macOS (5+ months) or Windows (don't).
My first contact with Linux was Debian 2.1, specifically from this distro's CDs: https://archive.org/details/linux-actual-06-2/LinuxActual_01...
To be honest, it was a miserable experience to install it on my main computer with nothing else available to turn to for help in case of problems. It was also hard to really try it due to the lack of drivers for the (then-current) ADSL modems.
But here I am a crapload of years later, still loving it :-)
Have had my RPi on Debian since Debian 9, with smooth upgrades every time.
Can't find release notes though, help?
I've found the docs, but they're huge. I'm looking specifically for a list of differences between 4.4 and 5.10 (or at least the biggest ones).
That's too big. I'm going to need a smaller distro.
That makes Debian Trixie about 32 times larger than Windows XP, which had approximately 45 million lines of code and is arguably the best Windows OS ever.
Debian Trixie is released about 24 years after Windows XP.
Naah. Early(ish) Win7, or maybe even late Vista.
Alas it's still not suitable as a daily driver for the average home user and probably never will be. It is unfortunate that Ubuntu has to reign supreme in that regard.
> Alas it's still not suitable as a daily driver for the average home user and probably never will be. It is unfortunate that Ubuntu has to reign supreme in that regard.
It's true that Ubuntu used to be the OOB-ready version of Debian, which "just worked", while base Debian took a lot of fiddling to even get wifi working.
These days, though, I find the opposite to be true: Ubuntu does lots of weird things I don't want, and I have to "fiddle" to disable all that. A base Debian install (the ISO with firmware bundled), however, just works.
For me, Ubuntu is officially off my list of distros I bother spending my time on.
In fact, Ubuntu has never been an especially user-friendly distro. At the beginning it was just Debian installed with Debian's experimental installer, before Debian decided to use that installer in stable. Nothing more, nothing less.
If you wanted to find a distro that was making efforts towards beginners looking for Gui config tools, you had to look at Suse and Mandrake (now Mandriva).
The only beginner-specific thing Ubuntu did was shipping CDs for free at a time when not everybody had fast internet connections and people looked to paper magazines for a bundled CD/DVD. And they stopped doing that a loooooong time ago.
I think that's fine for Debian. Maybe even a good thing.
Debian supplies a rock solid base for many general purpose tasks. Ubuntu and other distros are free to package that up in a user friendly way, but as a technical user I want to be able to go upstream and get a basic Linux system without extra stuff.
Why not?
My family members need little more than a web browser, media player, and office suite. Debian Stable is very suitable here; arguably more so than other distros, which tend to require maintenance more often.