That's pretty bad! I wonder what kind of bounty went to the researcher.
I'd be surprised if it's above 20K$.
Bug bounty rewards are usually criminally low; doubly so when you consider the effort usually involved in not only finding serious vulns, but demonstrating a reliable way to exploit them.
The fundamental thing to understand is this: The things you hear about that people make $500k for on the gray market and the things that you see people make $20k for in a bounty program are completely different deliverables, even if the root cause bug turns out to be the same.
Quoted gray market prices are generally for working exploit chains, which require increasingly complex and valuable mitigation bypasses which work in tandem with the initial access exploit; for example, for this exploit to be particularly useful, it needs a sandbox escape.
Developing a vulnerability into a full chain requires a huge amount of risk - not weird crimey bitcoin in a back alley risk like people in this thread seem to want to imagine, but simple time-value risk. While one party is spending hundreds of hours and burning several additional exploits in the course of making a reliable and difficult-to-detect chain out of this vulnerability, fifty people are changing their fuzzer settings and sending hundreds of bugs in for bounty payout. If they hit the same bug and win their $20k, the party gambling on the $200k full chain is back to square one.
Vulnerability research for bug bounty and full-chain exploit development are effectively different fields, with dramatically different research styles and economics. The fact that they intersect sometimes doesn't mean that it makes sense to compare pricing.
Got a Gmail ATO? Just run it against some of the leaked cryptocurrency exchange databases, automatically scan for wallet backups and earn hundreds of millions within minutes.
People are paying tens of thousands for “bugs” that allow them to confirm if an email address is registered on a platform.
Even trust isn’t much of a problem anymore, well-known escrow services are everywhere.
Even revealing enough details, but not everything, about the flaw to convince a potential buyer would be detrimental to the seller, as the level of details required to convince would likely massively simplify the work of the buyer should they decide to try and find the flaw themselves instead of buying. And I imagine much of those potential buyers would be state actors or organized criminal groups, both of which do have researchers in house.
The way this trust issue is (mostly) solved in drug DNMs is through the platform itself acting as an escrow agent; but I suspect such a thing would not work as well for selling vulnerabilities, because the volume is much lower, for one thing (too low for reputation building), and the financial amounts are generally higher, for another.
The real money to be made as a criminal alternative, I think, would be to exploit the flaw yourself on real-life targets, for example to drop ransomware payloads. These days ransomware groups even offer franchises: they'll take, say, a 15% cut of the ransom, provide assistance with laundering, exploiting the target, etc., and you claim your infection in the name of their group.
It’s why Firefox and Safari are so important, despite HN’s wish they’d go away.
Sadly, Mozilla is now an adtech company (https://www.adexchanger.com/privacy/mozilla-acquires-anonym-...) and by default Firefox now collects your data to sell to advertisers. We can expect less and less privacy for Firefox users, as Mozilla is now fully committed to trying to profit from selling Firefox users' personal data to advertisers.
Mozilla - wrongly - believes that the majority of FF users care more about Mozilla's hobby projects than about their browser.
That's why - as far as I know - it is to this day impossible to directly fund Firefox. They'd rather take money from Google than focus on the one thing that matters.
Although I must admit to the guilty pleasure of gleefully using Chromium-only features in internal apps where users are guaranteed to run Edge.
/s
Secondly, as a sibling pointed out, lots of apps have HTML ads, so if you show a malicious ad it could also trigger. I'm old enough to remember the early Google ads, which Google made text-only specifically because Google said that ads were a possible vector for malware. Oh how the turns have tabled.
That being said, as many above have pointed out, you can choose not to bring in dependencies. The Chrome team already does this with the font parser library: they limit dependencies to one or two trusted ones with little to no transitive dependencies. Let's not pretend C / C++ is immune to this; we had the xz vuln not too long ago. C / C++ has the benefit of a culture that doesn't use as many dependencies, but the problem still exists. With the increase of code in the world due to AI, this is a problem we're going to need to fix sooner rather than later.
I don't think the supply chain should be a blocker for using Rust, especially when one of the best C++ teams in the world, with good funding, still struggles to always write perfect code. The Chrome team has shown precedent for moving to Rust safely and avoiding dependency hell; they'll just need to do it again.
They have hundreds of engineers, many of whom are very gifted - hell, they can write their own dependencies!
Memory safety related security exploits happen in a steady stream in basically all non-trivial C projects, but supply chain attacks, while possible, are much more rare.
I'm not saying we shouldn't care about both issues, but the idea is to fix the low hanging fruit and common cases before optimizing for things that aren't in practice that big of a deal.
Also, C is not inherently invulnerable to supply chain attacks either!
Edit: Replying to ghusbands:
'unsafe' is a core part of Rust itself, not a separate language. And it occurs often in some types of Rust projects or their dependencies. For instance, to avoid bounds checking and not rely on compiler optimizations, some Rust projects use vec::get_unchecked, which is unsafe. One occurrence in code is here:
https://grep.app/pola-rs/polars/main/crates/polars-io/src/cs...
And there are other reasons than performance to use unsafe, like FFI.
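To make that concrete, here is a minimal, hypothetical sketch (not code from the polars link above, and the function name is made up) of the usual get_unchecked pattern: a single up-front bounds check replaces per-element checks, and the unsafe block documents why the unchecked accesses are sound.

```rust
/// Sums the first three elements without per-element bounds checks.
/// `get_unchecked` is `unsafe` because an out-of-bounds index is
/// undefined behavior - exactly the bug class Rust normally prevents.
fn sum_first_three(v: &[u32]) -> u32 {
    assert!(v.len() >= 3, "caller must supply at least 3 elements");
    // SAFETY: the assert above guarantees indices 0..3 are in bounds.
    unsafe { *v.get_unchecked(0) + *v.get_unchecked(1) + *v.get_unchecked(2) }
}
```

Whether the optimizer would have elided the checks anyway is case-by-case, which is exactly why some projects reach for this pattern explicitly.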
Edit2: ghusbands had a different reply when I wrote the above reply, but edited it since.
Edit3: Ycombinator rate-limits posting many new comments in a short time span. And ghusbands is also wrong: his answer was edited without him making that clear.
By the way, I have been having these kinds of arguments since Object Pascal, back when using languages safer than C was called straitjacket programming.
Ironically, most C wannabe replacements are Object Pascal/Modula-2-like in the safety they offer, except we know better 40 years later for the use cases those languages still had no answer for.
Edit: If you reply with a reply, rather than edits, you don't get such confusion.
Unfortunately, "seen in the wild" likely means that they _also_ had a sandbox escape, which likely isn't revealed publicly because it's not a vulnerability in a properly running process (i.e., if the heap were not already corrupted, no vulnerability would exist).
Given the staggering importance of the projects they should really have a full-time, well-staffed, well-funded, dedicated team combing through every line, hunting these things down, and fixing them before they have a chance to be used. It'd be a better use of resources than smart fridge integration or whatever other bells and whistles Google has most recently decided to tack onto Chrome.
What I mean by this is that GitHub has a limit (in my understanding) on the size of public repos.
Chromium bypasses that limit, which is why, if you fork Chromium, you get unlimited storage on GitHub, IIRC.
GitHub starts to get really wonky when you do this, though (IIRC).
I could be wrong - I usually am - but I wanted to share this to illustrate the absurd scale of how large Chromium is.
Saying "Markdown has a CVE" would sound equally off. I'm aware that it's not actually CSS that has the vulnerability, but when simplified, that's what it sounds like.
This is the "impact" section on https://github.com/huseyinstif/CVE-2026-2441-PoC:
- Arbitrary code execution within the renderer process sandbox
- Information disclosure: leak V8 heap pointers (ASLR bypass), read renderer memory contents
- Credential theft: read document.cookie, localStorage, sessionStorage, form input values
- Session hijacking: steal session tokens, exfiltrate via fetch() / WebSocket / sendBeacon()
- DOM manipulation: inject phishing forms, modify page content
- Keylogging: capture all keystrokes via addEventListener('keydown')
I get that CSS has changed a lot over the years - variables, scoping, things adopted from Less/Sass - but people use NoScript precisely because JavaScript is risky. If CSS can be just as risky... time for a NoStyle too?
Honestly, pretty excited for the full report since it's either stupid as hell or a multi-step attack chain.
No, we don't. All of the ones we have are heavily leveraged in Chromium or were outright developed at Google for similar projects. 10s of billions are spent to try to get Chromium to not have these vulnerabilities, using those tools. And here we are.
I'll elaborate a bit. Things like sanitizers largely rely on test coverage. Google spends a lot of money on things like fuzzing, but coverage is still a critical requirement. For a massive codebase, getting proper coverage is obviously really tricky. We'll have to learn more about this vulnerability, but you can see how even that limitation alone is sufficient to explain gaps.
And not in a trivial “this line is traversed” way - you need to actually trigger the error condition at runtime for a sanitizer to see anything. Which is why I always shake my head at claims that Go has “amazing thread safety” because it has the race detector (aka TSan). That's the opposite of thread safety; it is, if anything, an admission of a lack of it.
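For contrast, a minimal sketch (hypothetical names, plain std threading) of how Rust handles the same problem at compile time: an unsynchronized shared counter simply doesn't compile, so the only version you can run is the synchronized one - no runtime detector needed.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Sharing a plain `&mut counter` across threads is rejected by the
// compiler; the type system forces synchronization (here, a Mutex)
// up front, rather than detecting the race at runtime like TSan does.
fn parallel_count(n_threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..n_threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    // Each increment holds the lock, so the total is exact.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}
```

The result is deterministic because the data race is impossible to express, not because a detector happened to observe it.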
> 10s of billions are spent to try to get Chromium to not have these vulnerabilities, using those tools. And here we are.
Shouldn't pages run in isolated and sandboxed processes anyway? If that exploit gets you anywhere it would be a failure of multiple layers.
However if you have arbitrary code execution then you can groom the heap with malloc/new to create the layout for a heap overflow->ret2libc or something similar
Chromium uses probably the single most advanced sandbox out there, at least for software that users are likely to run into.
Chromium is filled with sloppy and old code. Some of the source code (at least if dependencies are included) is more than 20 years old, and a lot of focus has been on performance, not security.
Using Rust does not necessarily solve this. First, performance-sensitive code can require 'unsafe', and unsafe allows for memory unsafety, thus going back to square one, or further back. And second, memory safety isn't the only source of vulnerabilities. Rust's tagged unions and pattern matching help a lot with general program correctness, however, and C++ is lagging behind there.
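A small, hypothetical sketch of that last point (made-up types, not Chromium code): a Rust tagged union can't hold a mismatched tag/payload pair, and match is checked for exhaustiveness, so adding a variant later breaks the build instead of silently falling through the way a C++ switch over a tag enum can.

```rust
// Each variant carries only the data valid for it, so there is no
// way to read, say, a redirect location out of an error response.
enum HttpResponse {
    Ok { body: String },
    Redirect { location: String },
    Error { code: u16 },
}

fn describe(r: &HttpResponse) -> String {
    // `match` must be exhaustive: adding a new variant to the enum
    // is a compile error here until this function handles it.
    match r {
        HttpResponse::Ok { body } => format!("ok ({} bytes)", body.len()),
        HttpResponse::Redirect { location } => format!("redirect to {location}"),
        HttpResponse::Error { code } => format!("error {code}"),
    }
}
```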
Chromium is also some of the most heavily invested-in software with regards to security. Entire technologies that we now take for granted (seccomp-bpf comes to mind) exist to make Chrome safe. Sanitizers were a Google project that Chromium was an aggressive adopter of and contributor to. I could go on.
> Using Rust does not necessarily solve this. First, performance-sensitive code can require 'unsafe', and unsafe allows for memory unsafety, thus going back to square one, or further back.
This isn't really true? I have no idea what "further back" means here. The answer seems to just be "no". Unsafe does allow for memory unsafety but it's hilarious to me when people bring this up tbh. You can literally `grep unsafe` and ensure that your code in that area is safe using all sorts of otherwise insanely expensive means. Fuzz that code, ensure coverage of that code, run `miri`, which is like a sanitizer on steroids, or literally formally verify it. It's ridiculous to compare this to C++ where you have no "grep for the place to start" capability. You go from having to think of 10s of millions of lines of code that holds a state space vastly greater than the number of particles of this universe 100000000x over, to a tiny block.
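A hypothetical sketch of that workflow (function name made up): the unsafety is confined to one small block with an explicit SAFETY comment, behind a function whose public contract is safe - exactly the kind of spot `grep unsafe` surfaces for targeted fuzzing or a miri run.

```rust
/// Splits a slice at `mid` and sums each half.
/// Public contract is fully safe: the assert makes the raw-pointer
/// arithmetic below unreachable with out-of-range inputs.
fn split_sums(v: &[u64], mid: usize) -> (u64, u64) {
    assert!(mid <= v.len());
    let ptr = v.as_ptr();
    // SAFETY: the assert above guarantees 0..mid and mid..len are
    // both in-bounds ranges of `v`, so both raw slices are valid.
    let (a, b) = unsafe {
        (
            std::slice::from_raw_parts(ptr, mid),
            std::slice::from_raw_parts(ptr.add(mid), v.len() - mid),
        )
    };
    (a.iter().sum(), b.iter().sum())
}
```

Auditing this means reading one eight-line block against its stated invariant, not re-deriving the aliasing rules of an entire codebase.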
With the level of investment that Google puts into things like fuzzing, Rust would have absolutely made this bug harder to ship.
> And second, memory safety isn't the only source of vulnerabilities.
It's the source of this one and every ITW Chromium exploit that I can recall off of the top of my head.
A more robust alternative to disabling features is isolating where they execute. Instead of stripping out CSS/JS and breaking sites, you stream the browser session from a remote server.
If a zero-day like CVE-2026-2441 hits the parser, it crashes (or exploits) the remote instance, not your local machine. You get to keep the rich web experience (CSS, JS, fonts) without trusting your local CPU with the parsing logic. It’s basically "air-gapping" your browser tab. Not perfect, as attackers could still add a third step to compromise your remote and flow back to your local, but isolation like this adds defense in depth that then needs to compromise a much narrower attack surface (the local-remote tunnel) than if you ran it locally.
https://issues.chromium.org/issues/483569511 - [TBD][483569511] High CVE-2026-2441: Use after free in CSS. Reported by Shaheen Fazim on 2026-02-11
> Access is denied to this issue. Access to this issue may be resolved by signing in.
Interesting they are listing archived projects and not OSS-Fuzz. What's the reason for this?
also, this seems Chromium-only, so it doesn't impact Firefox?
honestly curious: do you think "based on Chrome" means they forked the engine, rather than just applied a UI skin?
Why do you dream things up? No Chrome-based browser will handle HTML/JS/CSS differently; it's all hooked deep into the engine. They only add things on top and change the chrome (the confusingly named UX component).
This is an interesting thought to me (like, how does one create a zero-day that doesn't look intentional?) but the more I think about it, the more I start to believe that this fully is not necessary. There are enough faulty humans and memory unsafe languages in the loop that there will always be a zero-day somewhere, you just need to find it.
(this isn't to say something like the NSA has never created or ordered the creation of a backdoor - I just don't think it would be in the form of an "unintentional" zero-day exploit)
However, exploits known (only) to a state actor would most definitely be a closely guarded secret. It's only convenient for a state to release information about an exploit when either it's already been made public or there are greater consequences for not releasing it.
So yes, exactly what you said. It's easier to find the exploits than to create them yourself. By extrapolation, you would have to assume that each state maintains its set of secret exploits, possibly never getting to use them for fear of the other side knowing of their existence. Cat & Mouse, Spy vs Spy for sure.
>In December 2013, a Reuters news article alleged that in 2004, before NIST standardized Dual_EC_DRBG, NSA paid RSA Security $10 million in a secret deal to use Dual_EC_DRBG as the default in the RSA BSAFE cryptography library https://en.wikipedia.org/wiki/Dual_EC_DRBG
But you are also right that this is not the only way they work. With the XZ Utils backdoor (2024), we normal nerds got an interesting glimpse into how such a backdoor gets planted. It was luckily discovered by a developer who wasn't looking for backdoors, just debugging a performance problem.
The term has long been watered down to mean any vulnerability (since every vulnerability was a zero-day at some point before the patch, I guess, is those people's logic? idk). Fear inflation and shoehorning seem to happen to every scary/scarier/scariest attack term. It might be easiest not to put too much thought into media headlines containing 0day, hacker, crypto, AI, etc. I recently saw non-remote "RCEs", and "supply chain attacks" that weren't about anyone's supply chain, copied happily onto HN.
Edit: fwiw, I'm not the downvoter
In a security context, it has come to mean days since a mitigation was released. Prior to disclosure or mitigation, all vulnerabilities are "0-day", which may be for weeks, months, or years.
It's not really an inflation of the term, just a shifting of context. "Days since software was released" -> "Days since a mitigation for a given vulnerability was released".
This seems logical, since by the etymology of zero-day it should refer to the release (= disclosure) of a vuln.
"Zero-day vulnerability" and "zero-day exploit" refer to the vulnerability, not the vulnerable software. Hence, by common sense, the availability in question is that of the vulnerability info or the exploit code.