Designing a modern and secure kernel in 2025 as a monolith is a laughable proposition. Microkernels are the way to go.
* In modern times, the practical benefit of a microkernel is minimal. Hardware is cheap and disposable, and virtual machines exist. The use cases for "tolerate a chunk of the kernel misbehaving" are few.
* To properly tolerate a partial crash takes a lot of work. If your desktop crashes, you might as well reboot.
* Modern hardware is complex and there's no guarantee that rebooting a driver will be successful.
* A monolithic kernel can always clone microkernel functionality wherever it wants, without paying the price elsewhere.
* Processes can't trust each other.
The last one is a point whose significance I hadn't realized for a while, but it seems a tricky one. In a monolithic kernel, you can have implicit trust that things will happen. If part A tells part B "drop your caches, I need more memory", it can expect that to actually happen.
In a microkernel, there can't be such trust. A different process can just ignore your message, or arbitrarily get stuck on something and not act in time. You have less ability to make a coherent whole because there's no coherent whole.
> A different process can just ignore your message
> arbitrarily get stuck on something and not act in time
This doesn't make sense. An implementation of a microkernel might suffer from these issues, but they are not problems of the design itself. There are many ways of designing message queues.
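For instance, a client can bound how long it waits for a reply instead of trusting the other process to answer. A minimal sketch of the "drop your caches" exchange, using Rust channels with a deadline (the service and function names here are illustrative, not from any real kernel):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Hypothetical client: ask a separate "cache" service to free memory,
// but bound the wait with a deadline rather than trusting the service
// to reply at all.
fn drop_caches_request(
    service_delay: Duration,
    deadline: Duration,
) -> Result<String, mpsc::RecvTimeoutError> {
    let (reply_tx, reply_rx) = mpsc::channel::<String>();

    thread::spawn(move || {
        // Simulate the service being slow or stuck.
        thread::sleep(service_delay);
        let _ = reply_tx.send("caches dropped".to_string());
    });

    reply_rx.recv_timeout(deadline)
}

fn main() {
    // A responsive service answers in time...
    match drop_caches_request(Duration::from_millis(1), Duration::from_millis(500)) {
        Ok(msg) => println!("reply: {}", msg),
        Err(_) => println!("deadline missed; reclaim memory another way"),
    }
    // ...a stuck one does not, and the client moves on instead of hanging.
    match drop_caches_request(Duration::from_secs(5), Duration::from_millis(50)) {
        Ok(msg) => println!("reply: {}", msg),
        Err(_) => println!("deadline missed; reclaim memory another way"),
    }
}
```

The point isn't that this is how any particular microkernel does it; it's that "the other process might never answer" is a solved engineering problem, not an inherent flaw.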
Also:
> In a microkernel, there can't be such trust [between processes]
Capabilities have solved this problem in a much better and more scalable way than the implicit trust model you have in a monolithic kernel. Using Linux as an example of a monolith is wrong, as it incorporates many ideas (and shortcomings) of a microkernel. For example: how do you deal with implicit trust when you can load third-party modules at run-time? Capabilities offer much greater security guarantees than "oops, now some third-party code is running in kernel mode and can do anything it wants with kernel data". Stuff like the eBPF sandbox is a poor man's alternative to the security guarantees of microkernels.
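To make the contrast concrete, here's a toy sketch of the capability model: a handle carries its own rights, the "kernel" checks the handle rather than the caller's identity, and rights can only be narrowed when a handle is passed on, never amplified. All names are illustrative; this quotes no real kernel's API.

```rust
// Rights encoded as bit flags on the handle itself.
const READ: u8 = 0b01;
const WRITE: u8 = 0b10;

// An unforgeable handle to a kernel object, bundled with its rights.
#[derive(Clone, Copy)]
struct Capability {
    object_id: u32,
    rights: u8,
}

// Derive a weaker capability by intersecting rights with a mask;
// there is no operation that adds rights back.
fn derive(cap: Capability, mask: u8) -> Capability {
    Capability {
        object_id: cap.object_id,
        rights: cap.rights & mask,
    }
}

// A syscall-ish operation: it checks the capability presented,
// not any ambient authority of the caller.
fn write_object(cap: Capability, _data: &str) -> Result<(), &'static str> {
    if cap.rights & WRITE != 0 {
        Ok(())
    } else {
        Err("denied: no WRITE right")
    }
}

fn main() {
    let full = Capability { object_id: 7, rights: READ | WRITE };
    // Hand a narrowed handle to a less-trusted process.
    let read_only = derive(full, READ);

    assert!(write_object(full, "hello").is_ok());
    assert!(write_object(read_only, "hello").is_err());
    println!("read-only capability correctly denied WRITE");
}
```

Compare that with a loaded kernel module: once it's in, it has every right the kernel has, with nothing to check at the point of use.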
Also, good luck making sure the implicitly trusted perimeter is secure in the first place when the surface area of the kernel is so wide it's practically impossible to verify.
If you allow me an argument from authority, it is no surprise Google's Fuchsia went for a capability-based microkernel design.
Its design largely failed at being a modern general-purpose operating system, and it has become primarily an OS for embedded devices, which is an entirely different set of requirements.
It’s also not that widely used. There’s only a handful of devices that ship Fuchsia today. There’s a reason for that.
I've seen this exact opinion before, only the year in it was "1992". And yet Linux was still made and written regardless of it.
Someone may come along and correct me about BSD. Apologies, I'm not super familiar with its history.