I appreciate that his anger is still there, it's just worded differently, in a more modern (softer) attack. Is this still how we control developers in 2025? Yes! Deal with it or go fork it yourself. The day this goes away is the day Linux begins to die a death of 1,000,000 cuts in code quality.
Kinda relevant: https://www.nytimes.com/2024/05/14/magazine/native-language-...:
> My own introduction to speaking French as an adult was less joyous. After reaching out to sources for a different article for this magazine with little success, I showed the unanswered emails to a friend. She gently informed me that I had been yelling at everyone I hoped to interview.
> Compared with English, French is slower, more formal, less direct. The language requires a kind of politeness that, translated literally, sounds subservient, even passive-aggressive. I started collecting the stock phrases that I needed to indicate polite interaction. “I would entreat you, dear Madam ...” “Please accept, dear sir, the assurances of my highest esteem.” It had always seemed that French made my face more drawn and serious, as if all my energy were concentrated into the precision of certain vowels. English forced my lips to widen into a smile.
That's a huge concession coming from him, which we should appreciate. But to be honest, I must agree with his point of view.
The golden rule when developing projects is to stick to one technology (or as few as possible); otherwise you'll end up with software for which you need to hire developers for several different languages, or accept developers who won't be experts in some of them. I am working on a project that, up until a year ago, had been partly written in Scala. All the Java developers who didn't know Scala were doomed to either learn it painfully (through errors and mistakes) or just ignore tasks touching that part of the system.
Multiple years later, what was the state of things? We had a portion of the codebase in Kotlin with dedicated native/Kotlin developers, and a portion of the codebase in RN with dedicated RN/JS developers.
Any time there's a bug, it's a constant shuffle between the teams over who owns it, over which part of the code, native or JS, the bug is coming from, and over who's responsible for it. A lot of the time nobody even knows, because each team is only familiar with part of the app now.
The teams silo themselves apart. Each team tries its best to hold on to the codebase: the native team tries to prevent the JS team from making the whole thing JS, while the JS team tries to convert as much to JS as possible. The native team argues why JS features aren't good; the JS team argues the benefits over writing in native. Constant back and forth.
Now no team has a holistic view of how the app works. There are massive chunks of the app that some other team owns and maintains in some other language. The ability to have developers "own" the app, know how it works, and have a holistic understanding of the whole product rapidly drops.
Every time there's a new feature there's an argument about whether it should be native or RN. The native team points out performance and look-and-feel concerns; the RN team points out code sharing and rapid-development benefits. Constant back and forth. Usually whoever has the most persuasive managers wins, rather than the decision being settled on technical merit.
Did we end up with a better app with our new setup, compared to one app, written in one language, with a team of developers that develop and own and know the entire app? No, no I don't think so.
Feels like a pretty parallel situation to Rust/C there.
In my current job, also at FAANG, my team (albeit SRE team, not dev team), owns moderately sized codebases in C++, Go, Python and a small amount of Java. There are people “specialised” in each language, but also everyone is generally competent enough to at least read and vaguely understand code in other languages.
Now of course sometimes the issue lies in the special semantics of the language and you need someone specialised to deal with it, but a large percentage are logic problems that anyone should be able to spot, or minor changes that anyone can make.
The key problem in the situation you described seems to be the dysfunction of teams arguing for THEIR side, versus viewing the choice of language as any other technical decision that should be made with the bigger picture in mind. I think this partly stems from unclear leadership on how to evaluate the decision. Ideally you’d have guidance on which to prioritise, rapid development or consistency, and make your language choice based on that.
As your codebase scales beyond a certain point, siloing is pretty inevitable, and it is better to focus on building a tree of systems and who is responsible for what. However, that doesn’t excuse anyone, especially the leads, for caring ONLY about their own system. Someone needs to understand things approximately, at least well enough to isolate problems between the various connected systems, even if they don’t specialise in all of them.
Was this at Meta? I doubt the iOS FB app and Insta are using RN so that must leave FB messenger?
The Linux kernel's first "release" was in 1991, hit 1.0 in 1994, and arguably had its first modern-ish release in 2004 with the 2.6 kernel. Rust's stable 1.0 release was in 2015, 10 years ago. There are people in the workforce now who were in middle school when Rust was first released. Since then, it has seen 85 minor releases and three follow-on editions, and has built both a community of developers and institutional buy-in from large orgs for business-critical code.
Even if you take the 1991 date as the actual first release, Rust as a stable language has existed for nearly a third of Linux's public development history (and of course it had a number of years of development prior to that). In that framing, I think it's a little unfair to put it in the "hip new thing" box.
Even many backend devs seem to shy away from things like SQL because they're not too comfortable with it. Which isn't bad per se; it's very easy to make a small mistake in a query that brings the database to its knees. Just a personal observation of mine.
The modern web is a gigantic mess, with security features hacked on top of everything just to make it even remotely secure. The moment it hit the desktop thanks to Electron, we had cross-site scripting attacks that allowed anyone to read local files from a plugin description page. If anything, it is the ultimate proof of how badly things can go.
I have used 20+ languages in my little projects (mostly solo), and now I have:
* Rust
* Kotlin
* Swift
* HTML
* SQL, with variations for: Postgres, SQLite, SQL Server, Firebird, DBISAM, MySQL
* Go
* FreePascal
And this is the fewest languages I've ever had.
People are flexible.
I have worked on lots of cross-language codebases. While it's extremely useful to have experts in each language or area, one can meaningfully contribute to parts written in other languages without being an expert. Certainly programmers on the level of kernel developers should readily be able to learn the basics of Rust.
There are lots of use cases for shared business logic or rendering code with platform-specific wrapper code, e.g. a C++ or Rust core with Swift, Kotlin, and TypeScript wrappers. Lots of high-level languages have a low-level API for fast implementations, like CPython's C API or Ruby's FFI. And the other way around: lots of native-code engines have scripting APIs for Lua, Python, etc.
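A minimal sketch of that core-plus-wrappers pattern (all names here, like `core_checksum`, are made up for illustration, not from any real project): the business logic stays in ordinary safe Rust, and a thin C-ABI shim is the only surface the Swift/Kotlin/TypeScript wrappers ever need to see.

```rust
// Pure business logic lives in ordinary, safe Rust.
fn checksum(data: &[u8]) -> u32 {
    data.iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
}

// Thin C-ABI shim for the wrappers. The only unsafe part is trusting the
// caller's pointer/length pair. (A real cdylib build would also add a
// no_mangle attribute so the symbol keeps its exported name.)
pub unsafe extern "C" fn core_checksum(ptr: *const u8, len: usize) -> u32 {
    if ptr.is_null() {
        return 0;
    }
    let data = std::slice::from_raw_parts(ptr, len);
    checksum(data)
}
```

The shim is deliberately boring: all the interesting logic is testable from pure Rust, and each platform wrapper only has to marshal a pointer and a length.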
If our testing framework is in Python, writing a wrapper so you can code tests for your feature in Perl, because you're more comfortable with it, is the wrong way to do it, imo.
But if writing a Fluentd plugin in Ruby solves a significant problem in the same infra, the additional language could be worth it.
Everything is about tradeoffs.
Anyway, I’m a bit of a Rust fanboy, and would generally argue that its use in kernel and other low-level applications is only a net benefit for everyone, and doesn’t add much complexity compared to the rest of these projects. But I could also see a 2030 version of C adding a borrow checker and more comparable macro features, and Rust just kind of disappearing from the scene over time, and its use in legacy C projects being something developers have to undo over time.
What makes this interesting is that the difference between C code and Rust code is not something you can just ignore. You will lose developers who simply don't want to, or can't, spend the time to get into the intricacies of a new language. And you will temporarily have a codebase where two worlds collide.
I wonder how in retrospect they will think about the decisions they made today.
Or just using those kernel APIs, period.
gccrs will allow the whole thing to be built with the GCC toolchain in one fell swoop.
If banks are still using COBOL and FORTRAN here and there, this is the most probable outcome in my eyes.
I suppose the biggest reason is that C programmers are more likely than not trained to kinda know what the assembly will look like in many cases, or to have a very good idea of how an optimizing compiler will optimize things.
This reminds me I need to do some non-trivial embedded project with Rust to see how it behaves in that regard. I'm not sure if the abstraction gets in the way.
So, I think if you do the same thing with Rust, you'll have that intuition, as well.
I have a friend who writes embedded Rust, and he said it's not as smooth as C, yet. I think Rust has finished the first 90% of its maturing, and has the other 90%.
This is the only way Hellwig's objection makes any kind of sense to me. Obviously, intra-kernel module boundaries are not REST APIs, where providers and clients would be completely separated from each other. Here I imagine that both the DMA module and its API consumers are compiled together into a monolithic binary, so if assumptions about the API consumers change, this could affect how the module itself is compiled.
Like half the point of high-level systems languages is to be able to express the _effects_ of a program and let a compiler work out how to implement that efficiently (C++ famously calls this the as-if rule, where the compiler can do just about anything to optimise so long as it behaves, in terms of observable effects, as if the optimisation hadn't been performed - C works the same). I don't think there are really any areas left, from a language perspective, where C is more capable than C++ or Rust at that. If the produced code must work in a very specific way, then in all cases you'll need to drop into assembly.
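A toy illustration of the as-if rule (my own example, not from the thread): the two functions below are observably equivalent, so a compiler is free to compile the loop into the closed form, and optimizers routinely perform exactly this kind of substitution.

```rust
// Sum of 0..n, written as a loop.
fn sum_loop(n: u64) -> u64 {
    let mut acc = 0u64;
    for i in 0..n {
        acc += i;
    }
    acc
}

// The same observable behavior in closed form: n * (n - 1) / 2.
// (wrapping_sub keeps n == 0 well-defined: 0 * u64::MAX == 0.)
fn sum_closed(n: u64) -> u64 {
    n * n.wrapping_sub(1) / 2
}
```

Since no observable effect distinguishes them, the compiler may substitute one for the other, which is also why hand-reasoning about "what the assembly will look like" only goes so far.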
The thing Rust really still lacks is maturity from being used in an embedded setting, and by that I mostly mean either toolchains for embedded targets being fiddly to use (or nonexistent) and some useful abstractions not existing for safe rust in those settings (but it's not like those exist in C to begin with).
Or even C++, which many forget was also born at Bell Labs in the UNIX group. The main reason was that Bjarne Stroustrup never wanted to repeat his Simula-to-BCPL downgrade: C with Classes was originally designed for a distributed computing research project at Bell Labs on UNIX, and Stroustrup certainly wasn't going to repeat the previous experience, this time with C instead of BCPL.
Plus, for that kind of thing you have "deterministic C" styles which guarantee things will be done your way, all day, every day.
For everyone answering: this is what I understood from chatting with people who write Rust in amateur and pro settings. It's not some "Rust is bad" bias or anything. The general consensus was that C is closer to the hardware and allows handling its quirks better, because you can do "seemingly dangerous" things that the hardware needs in order to initialize successfully. Older hardware is finicky, just remember that. Also, for anyone wondering: I'll start learning Rust the day gccrs becomes usable. I'm not a fan of LLVM, and have no problems with Rust.
The decision was not made today; what happened today (or, rather, a few days ago) is Linus calling out a C maintainer for going out of his way to screw Rust devs. Rust devs have also been called out for shitty behaviour in the process.
The decision to run a Rust experiment is a thing that can be (and is) criticized, but if you allow people to willfully sabotage the process in order to sink the experiment, you will also lose plenty of developers.
As for your concern about code quality, it's the exact same situation that already exists today. The maintainer is responsible for his code, not for the code that calls it. And the Rust code is just another user.
What if you're in a world where Rust code is either a significant or primary consumer of your interface ... surely as the API designer, you have to take some interest in how your API is consumed.
I'm not saying you become the owner of Rust bindings, or that you have to perform code reviews, or that you have veto power over the module... but you can't pretend Rust doesn't exist.
C maintainers who don't care about Rust may have opinions about the Rust API, but that's not the same thing :)
There are definitely things that can be done in C to make Rust's side easier, and it'd be much easier to communicate if the C API maintainer knew Rust, but it's not necessary. Rust exists in a world of C APIs, none of which were designed for Rust.
The Rust folks can translate their requirements to C terms. The C API needs to have documented memory management and thread safety requirements, but that can be in any language.
Sometimes the other team proves incompetent and you are forced to do their job. However that is an unusual case. So trusting other teams to do their job well (which includes trying something you don't like) is a good rule.
Yes. This is exactly what it is. It is a "pragmatic compromise" to side-step major internal cultural and philosophical issues (not technical issues). You're basically telling a number of C maintainers that they can simply pretend Rust doesn't exist, even if it may be the case that Rust code is the primary consumer of that API. That's a workable solution, but it isn't an ideal solution - and that's a little sad.
So if someone wants to write software in Rust that just uses the DMA driver, that should be fine. Linus is entirely in the right.
Yes. And that involves not completely ignoring an entire universe of consumers of your API, *as a general policy*. This is especially true with modules that may have Rust code as the primary consumer of the API.
I admit, I don't know what not ignoring Rust code by maintainer means in practice, and I agree it shouldn't mean that the C maintainer code-reviews the Rust bindings, or has veto power over the entire Rust module, or that the maintainer vets the architecture or design of the Rust module, or is on the Rust module mailing list. But it also shouldn't be that as a *general policy*, the C maintainer does not take any interest in how the API is consumed by Rust, and worse, pretends Rust doesn't exist.
>So if someone wants to write software in Rust that just uses the DMA driver, that should be fine.
That part is sensible. Did I argue otherwise?
It seems to me as if you're speaking about a hypothetical scenario where Rust needs something from the interface that isn't required by other languages. And you can't articulate what that might be because you can't think of an example of what that would look like. And also, in this scenario, Rust is the primary user of this driver interface.
But if that's the case, it's getting really close to "if things were different, they'd be different". If that's not the case, then I don't understand your case.
There's nothing wrong with the interface. Rust can use it just fine. It doesn't do anything C code wouldn't. They're not even asking for anything, from what I can see. The person who maintains the DMA driver doesn't want Rust _using_ his interface; he's rejecting PRs where Rust code is interfacing with his driver.
The closest analogy I can think of is he wrote a book, but he doesn't want left-handed people to read it.
The API maintainer should be concerned with how the API is consumed only insofar as it is consumable and doesn't cause unintended side effects. And neither of those should be affected by the language used to consume the API.
You didn't, but Christoph Hellwig did -- which is what started off this whole kerfuffle last week.
starts writing business plan while installing CMake
I hate it so much when people assume they're smart and work around an issue at the other end of an interface. It always ends up that you need to understand both the workaround and the original bug, or you can't even read the code.
And Hellwig works as a contractor, he's not a volunteer in the same way that Con Kolivas was. Hellwig isn't truly independent either.
Especially those mailing lists; an engineering marvel, indeed!
Why would you say something like that?
From the e-mail [0] the article is based on:
> The fact is, the pull request you objected to DID NOT TOUCH THE DMA LAYER AT ALL.
> It was literally just another user of it, in a completely separate subdirectory, that didn't change the code you maintain in _any_ way, shape, or form.
If enough people get behind a Rust OS, it could leapfrog Linux. I guess people just don't dream big anymore.
That's a good thing. This will test rust's reliability.
The "free" part in free software is not just free as in beer, it's also free as in freedom. That little bit gets forgotten. People work on it because they want to, not because they have to. If a developer does not want to use Rust, they can and should not be forced to. It does not matter if Rust is objectively safer, or better, or any of the purported arguments. Forcing it eliminates the freedom of choice.
The Rust folks should make their own kernel and OS. Let it compete directly with Linux. In open source, this is the way.
"The Rust folks" are part of that existing body of maintainers, not some outside force.
"Freedom" doesn't mean freedom from Rust, for starters.