This doesn’t seem grounded in reality. If you follow the link to the “hooks” that Windows eBPF makes available [1], it’s just for incoming packets and socket operations. IOW, MS is expecting you to use the Berkeley Packet Filter for packet filtering. Not for filtering I/O, or object creation/use, or any of the other million places a driver like Crowdstrike’s hooks into the NT kernel.
In addition, they need to be in the kernel in order to monitor all the other 3rd party garbage running in kernel-space. ELAM (early-launch anti-malware) loads anti-malware drivers first so they can monitor everything that other drivers do. I highly doubt this is available to eBPF.
If Microsoft intends eBPF to be used to replace kernel-space anti-malware drivers, they have a long, long way to go.
[1]: https://microsoft.github.io/ebpf-for-windows/ebpf__structs_8...
Just to use an analogy: Imagine people do their banking on JavaScript websites with Google Chrome, but if they use Microsoft Edge it says "JavaScript isn't supported, please download and run this .EXE". I'm not sure we'd be asking "if" Microsoft would support JavaScript (or eBPF), but "when."
Also this problem of too much software running in the kernel in an unbounded manner has long existed. Why should Microsoft suddenly invest in solving it on Windows?
Mind you, it looks like after 20-ish years Windows still supports loading legacy filter drivers. Given the considerable work that goes into getting even a simple filesystem minifilter driver working reliably, it's safe to assume that we'd be looking at a similarly protracted transition period.
As to the performance, I don't think the raw infrastructure to support minifilters is the major performance hit. The work the drivers themselves end up doing tends to be the bigger hit in my experience.
Some background for the curious:
https://www.osr.com/nt-insider/2019-issue1/the-state-of-wind...
They do have rigorous tests (WHQL); it's just that Crowdstrike decided that was too burdensome for their frequent updates and decided to inject code from config files (thus bypassing the control).
The fault here is entirely with Crowdstrike.
Internally, Microsoft is running more and more workloads on Linux, and externally, I've had the .NET team tell me more than once that Linux is the preferred environment for .NET. The SQL Server team continues to push hard for Linux compatibility with every release.
EDIT: Windows Desktop gets more love because they clearly see that as an important market. I'm talking more about Windows Server.
I think the part I specifically dispute is that the only negative outcome is wasted CPU cycles. That's likely the case for this class of bug, but there are plenty of failure modes where a bad ruleset could badly brick a system and make it hard to recover.
That's not to say eBPF-based security modules aren't the right choice for many vendors, just that we should understand which risks they do and do not avoid, and what part of the failure chain they particularly address.
Microsoft has been working on eBPF for a few years at least.
https://opensource.microsoft.com/blog/2021/05/10/making-ebpf...
https://lwn.net/Articles/857215/
If you're really concerned, they have discussions and communication channels where you're invited to air your concerns. They're listed on their github:
https://github.com/microsoft/ebpf-for-windows
Who knows, maybe they already have answers to your concerns. If not, they can address them there.
Also your statement is sometimes not true, although I certainly sympathise in the mainline case. In some contexts you really do need to keep on trucking. The first example to spring to mind is "the guidance computers on an automated Mars lander"; the round-trip to Earth is simply too long to defer responsibility in that case. If you shut down then you will crash, but if you do your best from a corrupted state then you merely probably crash, which is presumably better.
If you attempt to load an eBPF program that the verifier rejects, the syscall to load it fails with EINVAL or E2BIG. What your user-space program then does is up to you, of course.
3rd party hooking into the kernel is the 3rd party's responsibility. It is like equipping your car with LPG: THAT hooks into the engine (kernel). And when I had a faulty gas pressure sensor, my car actually halted (BSOD, if you will) instead of automatically failing over to gasoline, as it is designed to do.
You can argue that the car had no means to continue execution but the kernel has; however, invalid kernel state can cause more corruption down the road. Or, as the parent even points out, carry out lethal doses of something.
Updates would require the caller to call different functions which means putting the responsibility in the hands of the caller, where it should be, instead of on whoever has a side channel to tamper with the kernel.
You end up with the work-perfectly-or-not-at-all behavior that you're after: if the function that goes with the indicated hash is not present, you can't call it, and if it is present, you can't call it in any way besides how it was intended.
Furthermore, the reaction to a malformed state need not be "ignore". It could disable restricted user login; or turn off the screen.
If the worry is that this is viable to abuse by malware, well, if the malware can already rewrite the on-disk files for the AV, I wonder whether it's really a good idea to trust the system itself to be able to deal with that. It'd probably be safer to just report that up the security foodchain, and potentially let some external system take measures such as disable or restrict network access. Better yet, such measures don't even require the same capabilities to intervene in the system, merely to observe - which makes the AV system less likely to serve as a malware vector itself or to cause bugs like this.
If the failed system is a security module, I think that's absolutely correct. If the system runs, without the security module, well, that's like forgetting to pack condoms on Shore Leave. You'll likely be bringing something back to the ship with you.
Someone needs to be testing the module, and the enclosing system, to make sure it doesn't cause problems.
I suspect that it got a great deal of automated unit testing, but maybe not so much fuzz and monkey (especially "Chaos Monkey"-style) testing.
It's a fuzzy, monkey-filled world out there...
eBPF is fantastic, and it can be used for many purposes and improve a lot of things, but this is IMO overselling it. Assuming that BPF itself is free of bugs, it's still a rather large sprawl of kernel hooks, and those hooks invoke eBPF code, which can call right back into the kernel. Here's a list:
https://www.man7.org/linux/man-pages/man7/bpf-helpers.7.html
bpf_probe_read_kernel() is particularly heavily used, and it is not safe. It tries fairly hard not to OOPS or crash, but it is definitely not perfect.
The rest of that list contains plenty of things that will easily take down a system, even without actually oopsing or panicking in the process.
And, of course, any tool that detects userspace “malicious behavior” and stops it can start calling everything malicious, and the computer becomes unusable.
Meanwhile, eBPF has no real security model on the userspace side. Actual attachment of an eBPF program goes through the bpf() syscall, not through sensibly permissioned operations on the underlying kernel objects being attached to, and there is nothing whatsoever that confines eBPF to, say, a container that uses it. (See bpf_probe_read_kernel() -- it's fundamentally able to read all kernel memory.)
So, IMO, most of the benefit of eBPF over ordinary kernel C code is that eBPF is kind of like writing code in a safe language with a limited unsafe API surface. It's a huge improvement for this sort of work, but it is not perfect by any means.
> The verifier is rigorous -- the Linux implementation has over 20,000 lines of code
The verifier is absurdly complex. I'd rather see something based on formal methods than 20kLOC of hand-written logic.
Doing this from bpf assumes that all "allowed" addresses are side-effect-free and will either succeed or cleanly fault. Off the top of my head, MMIO space (including, oddities like the APIC page on CPUs that still have that) and TDX memory are not in this category.
Isn’t one of the purposes of an OS to police software? I get that this has to do with the OS itself, but what does watching the watchers accomplish other than adding a layer which must then be watched?
Why not reduce complexity instead of naively trusting that the new complexity will be better long term?
Old way: Load kernel driver, hook into bazillions of system calls (doing whatever it is you want to do), pray you don't screw anything up (otherwise you can get a panic though not necessarily--Linux is quite robust).
eBPF way: Just ask eBPF to tell you what you want by giving it some eBPF-specific instructions.
There's a rundown on how it works here: https://ebpf.io/what-is-ebpf/
> …via a very picky sandbox…
When the eBPF is a CrowdStrike mechanism, and eBPF is “picky,” it is clearly “watching the watchers.”
> There are other ways to reduce risks during software deployment that can be employed as well: canary testing, staged rollouts, and "resilience engineering" in general
You don't need a new technology to implement basic industry-standard quality control
My impression is that the WebAssembly verifier is much simpler.
If Microsoft includes a hardcoded whitelist that covers some essentials needed for recovery, that could make a bug in such a tool easier to fix, but it could still cause effective downtime (system running but unusable) until such a fix is delivered.
> eBPF, which is immune to such crashes.
I tried to Google about this, but I cannot find anything definitive. It looks like you can still break things. Can an expert on eBPF please comment on this claim? This is the best that I could find: https://stackoverflow.com/questions/70403212/why-is-ebpf-sai...
Unless of course there is a bug in eBPF itself (https://access.redhat.com/solutions/7068083) @brendangregg and the kernel panics/BSoDs anyway, which you mention later in the article of course.
“Many eyes” is a bit dubious in general but the Linux kernel is pretty much the best case for it being true.
Assuming every security critical system will be on a recent enough kernel to support this...
https://blogs.oracle.com/linux/post/oracle-linux-and-bpf
$ cat /etc/redhat-release /etc/oracle-release /proc/version
Red Hat Enterprise Linux release 8.10 (Ootpa)
Oracle Linux Server release 8.10
Linux version 5.15.0-203.146.5.1.el8uek.x86_64 (mockbuild@host-100-100-224-48) (gcc (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9.2.0.1), GNU ld version 2.36.1-4.0.1.el8_6) #2 SMP Thu Feb 8 17:14:39 PST 2024
Crowdstrike knows the computers they're running on; it is trivial to implement a system where only a few designated computers download and install the update and report metrics before the update controller decides to push it to the next set.
"Mitigation" is dealing with an outage/breakage after it occurs, to reduce the impact or get system healthy again.
You're talking about "prevention" which keeps it from happening at all.
Canarying is a generic approach to prevention, and should not be skipped.
Avoiding the risk entirely (eBPF) would also help prevent outage, but I think we're deluding ourselves to say it "solves" the problem once and for all; systems will still go down due to bad deploys.
> In the future, computers will not crash due to bad software updates, even those updates that involve kernel code.
Come on. Computers will continue to crash in the future, even when using eBPF. I am quite certain.
I expect it can be solved within some limited contexts, but those contexts are not useful, at least not at the level of "generic kernel code".
So BPF introduced a very limited bytecode, which is complex enough that it can express long filters with lots of and/or/brackets, but limited enough that it's easy to check the program terminates and is crash-free. It's still quite limited: prior to ~2019, all loops had to be fully unrolled at compile time, as the checker didn't support loops.
It turned out that, although limited, this worked pretty well for filtering packets. So later, when people wanted a way to filter all system calls, they realised they could extend the battle-tested BPF system.
Nobody is claiming to have solved the halting problem.
Back compat seems to be such a shibboleth in the Windows world, but comes at an incredible price. The reasons cited all seem to boil down to keeping some imagined customers' obscure LOB app running for decades. But that seems like an excuse to me. Surely Microsoft would like to shake out the last diehards running some VB5 app on a patched up PC in a factory. Isn't it more beneficial to everyone to start sunsetting acres of ancient NT code and approaches and streamline the entire attack surface?
1. Compliance: everyone affected by this bug has auditors. Once safer alternatives are available, standards like CIS, PCI, etc. will be updated to say you should use the new interface, and every enterprise IT department will have pressure to switch to eBPF tools. We saw this with BitLocker: storage encryption used to be a pain, people resisted it, but over time it became universal because the cost of swimming upstream was too high.
2. Signing. Microsoft can start requiring more proof of need and restrictions for signing drivers. They have to be careful to avoid the appearance of favoritism but after this debacle that’s a LOT easier. I would bet some engineer is working on a draft of mandatory fault handling and testing proof requirements for critical kernel drivers now and I would not be surprised to see it include a timeframe for adopting memory-safe languages.
If your code somehow still relies on some buggy behaviour to work, then MS shouldn't do anything to preserve that anymore - apparently they used to, but I'm not so sure nowadays.
However 'ancient NT' code should probably still function just fine since the Win32 API hasn't changed much for a while, and MS don't actively deprecate function calls (unlike Apple who seem to do it a bit on a whim recently). I would put this down to the API being pretty well designed in the first place.
This is obviously not true. It might be the worst it can do, by itself, to the currently running kernel. It's not the worst it can do to the machine or its user(s).
There are infinite harmful things an eBPF program can do. As can programs solely in user-space. There is a specific class of vulnerabilities being mitigated by moving code from kernel to BPF. That does not mean that eBPF programs are in general safe.
It's a move in the right direction but it probably won't fully mitigate issues like this for another 5+ years.
I take issue with that. Kernel programming was not to blame; looking up addresses from a file and accessing those memory locations without any validation is. The same technique would yield the same result at any Ring.
All of the service interruptions would have been just "computer temporarily not protected by crowdstrike agent". Not the same thing at all.
Significant and often far worse. It would leave the machine running unprotected.
When various apps running the world are crashing, unable to execute because malware protection is failing, there is no difference.
Yes, the kernel is fine and is not to blame. But running basically a rootkit controlled by a third party indeed is to blame.
That's still an outage for those key systems.
Given all its restrictions, I doubt something complex like a graphics driver would be possible. But then, I know nothing about graphics driver programming.
I think this undersells how annoying it is. There's a bit of an impedance mismatch. Typically you write code in C and compile it with clang to eBPF bytecode, which is then checked by the kernel's eBPF verifier. But in some cases clang is smart enough to optimize away bounds checks, but the eBPF verifier isn't smart enough to realize the bound checks aren't needed. This requires manual hacking to trick clang into not optimizing things in a way that will confuse the verifier, and sometimes you just can't get the C code to work and need to write things in eBPF bytecode by hand using inline assembly. All of these problems are massively compounded if you need to support several different kernel versions. At least with the Rust borrow checker there is a clearly defined set of rules you can follow.
"eBPF is a technology that can run programs in a privileged context such as the operating system kernel. It is the successor to the Berkeley Packet Filter (BPF, with the "e" originally meaning "extended") filtering mechanism in Linux and is also used in non-networking parts of the Linux kernel as well."
Surely that bars CrowdStrike's check for unprovably bounded vulnerabilities.
If you think you know a way to crash the Linux kernel by loading and running an eBPF program, you should report a bug.
"BPF originally stood for Berkeley Packet Filter, but now that eBPF (extended BPF) can do so much more than packet filtering, the acronym no longer makes sense. eBPF is now considered a standalone term that doesn’t stand for anything."
How about Microsoft's large government and commercial customers make it a requirement that MS does not develop a single new feature for the next two fucking years or however long it takes to go through the entirety of the Windows+Office+Exchange code base and to make sure there are no security issues in there?
We don't need ads in the start menu, we don't need telemetry, we don't need desktop Outlook becoming a rotten slow and useless web app, we don't need AI, we certainly don't need Recall. We need an OS environment that doesn't need a Patch Tuesday where we have to check if the update doesn't break half the canary machines.
And while MS is at that they can also take the goddamn time and rework the entire configuration stack. I swear to god, it drives me nuts. There's stuff that's only accessible via the registry (and there is no comprehensive documentation showing exactly what any key in the registry can do - large parts of that are MS-internal!), there's stuff only accessible via GPO, there's stuff hidden in CPLs dating back to Windows 3.11, and there's stuff in Windows' newest UI/settings framework.
Sandboxes are safe, but are ultimately virtual machines, and virtual machines can be made to live in a world that's not real.
Are they saying that device drivers should be written in eBPF?
Or maybe their drivers should expose an eBPF API?
I assume some driver code still needs to reside in the actual kernel.
> If your company is paying for commercial software that includes kernel drivers or kernel modules, you can make eBPF a requirement.
Windows soon? It may still be at least a year out. Would that be a fair statement? "At least" being the operative word here.
Specifically in the context of network security software, for eBPF programs to be portable across Windows/Linux, we would need MSFT to add a lot more hooks and expose internal kernel structs, hopefully via a common libbpf definition. Otherwise, I fear, having two versions of the same product across two OSs would mean more security and quality issues.
I guess the point I am trying to make is: we would get there, but we are more than a few years away. I would love to see something like Cilium on vanilla Windows for a software-defined company-wide network. We can then start building enterprise network security into it. Baby steps!
---
btw, your talks and blog posts about bpftools are a godsend!
Here I am using the term "EDR". Until this CrowdStrike debacle I'd never heard it.
That only tells you how seriously you should take my opinions.
> eBPF (no longer an acronym) […]
Any reason why the official acronym was done away with?
1) Is CrowdStrike Falcon using eBPF for their Linux offering?
2) Would the faulty patch update get caught by the eBPF verifier?
Oh I'm sure they'll find a way.
Which is odd, given there’s been a bunch of kernel privesc bugs using eBPF…
I'm still waiting on my flying car...
100% BS. Even if they don't "crash" they will "stop functioning as intended" which is just the same. It's absolutely disgusting how this industry is now using this one outage as a talking point to further their totalitarian agenda.
It reminds me of how Google went after adblockers with their new extension model that also promised more "security". It's time we realised what they're really trying to do. In fact, I wonder whether this outage was not accidental after all.
For instance, why not find a subset of your customers that are low risk, push it out to them, and see what happens? Or perhaps have your own fleet of example installations to run things on first. None of which depends on any specific technology.
But the appeal-to-authority evidence that the article presents is not.
"-- the Linux implementation has over 20,000 lines of code -- with contributions from industry (e.g., Meta, Isovalent, Google) and academia (e.g., Rutgers University, University of Washington). The safety this provides is a key benefit of eBPF, along with heightened security and lower resource usage."
> If the verifier finds any unsafe code, the program is rejected and not executed. The verifier is rigorous -- the Linux implementation has over 20,000 lines of code [0] -- with contributions from industry (e.g., Meta, Isovalent, Google) and academia (e.g., Rutgers University, University of Washington).
[0] links to https://github.com/torvalds/linux/blob/master/kernel/bpf/ver... which has this interesting comment at the top:
/* bpf_check() is a static code analyzer that walks eBPF program
* instruction by instruction and updates register/stack state.
* All paths of conditional branches are analyzed until 'bpf_exit' insn.
*
* The first pass is depth-first-search to check that the program is a DAG.
* It rejects the following programs:
* - larger than BPF_MAXINSNS insns
* - if loop is present (detected via back-edge)
...
I haven't inspected the code, but I thought that checking for infinite loops would imply solving the halting problem. Where's the catch?
The halting problem is only unsolvable in the general case. You cannot prove that any arbitrary piece of code will stop, but you can prove that specific types of code will stop, and reject anything that you're unable to prove. The trivial case is "no jumps": if your code executes strictly linearly and is itself finite, then you know it will terminate. More advanced cases can also be proven, like a loop over a very specific bound, as long as you can place constraints on how the code can be structured.
As an example, take a look at Dafny, which places a lot of restrictions on loops [0], only allowing the subset that it can effectively analyze.
[0] https://ece.uwaterloo.ca/~agurfink/stqam/rise4fun-Dafny/#h25
A trivial example[1]:
int main() {
while (true) {}
int x = foo();
return x;
}
This program trivially runs forever[2], and indeed many static code analyzers will point out that everything after the `while (true) {}` line is unreachable.
I feel like the halting problem is incredibly widely misunderstood to be about "ANY program" when it really talks about "ALL programs".
[1]: In C++, this is undefined behavior technically, but C and most other programming languages define the behavior of this (or equivalent) function.
[2]: Fun relevant xkcd: https://xkcd.com/1266/
but there are always sets of programs for which it is clearly possible to guarantee their termination
e.g. the program `return 1+1;` is guaranteed to halt
e.g. given a program like `while condition(&mut state) { ... }` where `condition()` is guaranteed to halt but is otherwise unknown, the whole thing is not guaranteed to halt; but if you turn it into `for _ in 0..1000 { if !condition(&mut state) { break; } ... }` then it is guaranteed to halt after at most 1000 iterations
or in other words, eBPF only accepts programs which it can prove will halt in at most BPF_MAXINSNS instructions (though it's stricter than my example, i.e. you would need to unroll the for-loop to make it pass validation)
the thing with provably-halting programs is that they tend to not be very convenient to write and/or are quite limited in what you can do with them, i.e. they are not suitable as general-purpose programming languages at all
I rather expect useful or needed code would be rejected due to "not-sure-it-halts", and then people will use some kind of exception or not use the verifier at all, and then we are back to square one.
Crowdstrike screwed the pooch here, yes. But after a couple of days I feel like I haven't read enough blog posts and articles that crap on Microsoft. It's their job to build a secure operating system; instead they deliver Windows, and because they themselves cannot secure Windows, they ship Defender... and we use tools like Falcon as a bandaid for Microsoft's bad security practices.
eBPF lets you prevent things too. seccomp filters can block syscalls.
The bigger problem is the performance you mentioned in 1. Crowdstrike's Linux agent can work using eBPF instead of a kernel module, and will fall back to that if the current kernel version is more recent than the agent supports. But... then it uses up a lot more CPU.