1. Starting a few years before 2000 and continuing today, the software industry is quite profitable, pays well, and has lots of openings, while the academic job market in systems research continues to pay poorly and offers far fewer positions. So if you want to do systems software research while also having an enjoyable quality of life, you might as well go to a company and get paid well instead of spending your days writing a thesis and grant proposals.
2. Computer science is a field where the cost of basic research equipment is low (a computer), while the more interesting research environments are generally beyond the scale of academia (tens of thousands of hardware nodes; hundreds, thousands, or more QPS of production load; etc.). That makes it quite different from, e.g., biology or high-energy physics on one end, where you usually need to be in academia to get access to the equipment, or mathematics (including theoretical CS) and literature on the other, where it doesn't matter where you are. In systems research you only get access to the equipment by being in industry.
That doesn't mean that systems software research done in industry is (or was or will be) irrelevant; it means that the narrower definition of "research" as "that which is done in academia" is inaccurate (including industry with the trappings of academia, i.e., people at Google or Bell Labs writing papers in academic journals and hiring people with Ph.D.s). Systems software research happens in industry and is quite relevant to industry.
Commercial research needs to keep in mind the legacy systems already used by the sponsor. Innovations are more evolutionary than revolutionary as the field matures. They may be tailored to the observable pain points of the research sponsor, and they may not be widely shared if they yield a competitive advantage. While it may not demand immediate returns, commercial research does have an axe to grind. All of this hampers advancement in computer science in general.
I also don't know if there's any commercial research on the scale of Xerox PARC or Bell Labs. I can't think of any off the top of my head. Microsoft and Google do some pretty neat research, but I don't think they've shipped anything on a similar scale.
There's really no organization hiring the best talent to work on the kind of black swan events commercial research may miss. For example, I think it'd be cool to have a microcode-based OS; I've heard it would help with keeping operating systems secure. But who would fund it, and who would work on it? Right now it doesn't look like anybody would, and that might be what Rob is concerned about.
- Linux's read-copy-update synchronization mechanism. It has been described in papers, but you're better off following mailing list posts or LWN writeups.
- Rust's borrow checker and lifetime system. It's built on existing well-known ideas (e.g. affine types) and there's since been some academic work on formalizing it, but the specific system Rust uses has no direct precedent, is pretty novel, and was developed outside academia. (Note that Rust came out of Mozilla Research, which is far, far smaller than Bell Labs but also an organization that intentionally works on revolutionary and not evolutionary improvements.)
- libdill and Trio's structured concurrency, a solid theoretical framework for handling async/await-shaped problems without turning your execution into concurrent spaghetti. The techniques are not unprecedented, but https://vorpus.org/blog/notes-on-structured-concurrency-or-g... is a better framing of it.
I think the real impediment to OS research is deployment. If your idea isn't compatible with one of the existing OSs, in such a way that it can run a web browser, then nobody's going to use it. Heck, even Windows Phone couldn't get adoption. OS ideas that require people to completely rewrite applications and interaction paradigms are non-starters no matter what benefits they offer - unless they can fulfil a need that can't be fulfilled any other way. So quite a lot of work goes into bypassing the OS entirely for hardware-specific single-program networking applications, and everyone else has to stick with their existing paradigms.
Even a totally plain-looking device could be full of innovative research: a network router with a completely new kernel. A new network protocol or a compression algorithm. A new programming language. Automatic verification and/or fuzzing tools. A network of internet-of-things devices that share no code with any existing OS.
Systems software research has come a long way since 2000.
And the number one thing that could have gotten better in the last 19 years but didn't: security.
It's almost like human society is trying really hard to keep developers busy.
That is because the majority of people fundamentally do the same things with computers as they did 20 years ago: browse the web, edit pictures and videos, put together presentations, do document layout, spreadsheets, etc.
Of course now your home videos are in 4K instead of 320p, and webpages are 10MB of JS instead of 10k of text... but these are changes in scale, not in kind.
However, shiny features are what attract people to your platform, so we get shininess (never mind if functionality actually gets lost in the process).
The perfect illustration of this for me is George RR Martin, a professional writer of indisputable success, doing all of his writing work on a 1980s workstation with WordStar 4.
In 2000, people mostly still used Windows 9x. A single-user system with no sandboxing and no built-in firewall.
This is an astonishing claim: what makes you think it hasn't gotten better? It's gotten a LOT better since 2000.
If you estimate overall exposure as something like:

(amount of data to protect * number of systems that store or handle data * level of risk) - mitigation

...you'll probably agree that the mitigation mechanisms improved 100x, but the risk grew even more.
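To make that back-of-the-envelope formula concrete, here is a toy calculation. Every number below is an invented, illustrative assumption, not data; the only point is that a 100x improvement in mitigation can still be swamped by growth in the other factors.

```python
# exposure = data * systems * risk - mitigation
# All figures are made up purely to illustrate the formula above.

# Rough "2000" baseline, in arbitrary units:
data, systems, risk, mitigation = 1.0, 1.0, 1.0, 0.5
exposure_2000 = data * systems * risk - mitigation  # 0.5

# Hypothetical "2019": mitigation improved 100x, but the amount of
# data online, the number of connected systems, and attacker
# incentives all grew by orders of magnitude as well.
data, systems, risk, mitigation = 1000.0, 100.0, 10.0, 50.0
exposure_2019 = data * systems * risk - mitigation  # 999950.0

assert exposure_2019 > exposure_2000
```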
How can operating systems research be relevant when the resulting operating systems are all indistinguishable?
[...]
Linux is the hot new thing... but it's just another Unix.
Although they are rooted in FP notions of purity and immutability, I would say that NixOS and Guix try to fundamentally change operating systems.
Does "started in academia" not count? Because that'd give you easy counter-examples, e.g. Scala, Spark.
* "development" as in making a technology usable, not software development
If you look at networking, there has recently been a move toward new protocols (QUIC) as a result of systems research examining the deficiencies of TCP. Another area is consensus algorithms: we now have large-scale, real-life deployments of consensus algorithms, for example Spanner and etcd.
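For a flavor of why those deployments can work at all, here is a tiny brute-force sketch (hypothetical helper functions, not Spanner's or etcd's actual code) of the core invariant that Raft/Paxos-style systems rely on: any two majority quorums of the same cluster must overlap, so a value acknowledged by one majority cannot be silently forgotten by a later one.

```python
from itertools import combinations

def majority(n: int) -> int:
    """Smallest quorum size that guarantees overlap in an n-node cluster."""
    return n // 2 + 1

def any_disjoint_quorums(n: int) -> bool:
    """Brute-force check: do two majority-sized quorums ever fail to overlap?"""
    q = majority(n)
    quorums = list(combinations(range(n), q))
    return any(not (set(a) & set(b))
               for a, b in combinations(quorums, 2))

# For small clusters, no two majorities are ever disjoint. This is why a
# write acknowledged by a majority survives any later leader election
# that also requires a majority: the new quorum always contains at least
# one node that saw the write.
for n in range(1, 8):
    assert not any_disjoint_quorums(n)
```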
The late 90’s and early 2000’s were a weird time when hardware was improving so fast, and taking software along for a free ride, that a lot of software was good enough. Now, as we bump up against the end of Moore’s Law, we will see more research and real-life usage of multicore and heterogeneous computing, and of libraries, languages, and operating systems that try to make that easier.
Would you say that wasn’t the case during the past 20 years (2000-2019)? Or do you consider all that period to be “early 2000’s”?
Granted, whilst it is system-level, it is not system software. And it has not yielded demos that people have regarded as cool; rather, some have received them as horrifyingly worrying.
But it has definitely influenced industry.
OS X was modern for its time, but where they’ve really pushed the envelope is with iOS. They can simply move faster at scale than anyone else because they almost entirely own the IP for both the software and all major hardware components and can pivot on a dime compared to market-based coordination.
There was almost nothing innovative about OS X, even when it came out. It was just packaged and marketed very well. Objective-C and NeXTSTEP were a userland improvement over typical C userlands, but that's not saying much.
> OS X was modern for its time
It really wasn't. The Mach "microkernel" was from outdated 80s research. It's bloated, slow and inflexible compared to the state of the art at the time.
iOS was innovative at a UI/UX level, definitely. But I can't really think of anything they did at a systems level that was at all innovative?
https://github.com/intel/spark-mpi-adapter
Oh look, a paper: "For example, a recent case study found C with MPI is 4.6–10.2× faster than Spark on large matrix factorizations on an HPC cluster with 100 compute nodes"
Does it sound like large data analytics would have horribly stagnated?
I'm amazed at how many comments revolve around "But wait, of course systems research has evolved, see XX and YYY", followed by responses along the lines of "Nah, he was not talking about XX and YYY, rather ZZZ, etc..."
I hate being the "please define xxx" guy, but is there a consensus definition of what "systems software" is?