In this particular case, the poster is complaining that 3 CVEs were assigned for memory corruption vulnerabilities reachable only from the dnsmasq configuration file. I didn't read carefully, but the presumption that config file memory corruption bugs aren't vulnerabilities is problematic, because user input can find its way into configurations through templating; it depends on how innocuous the field triggering the bug is.
You explain that the CVE makes no sense, and you’re met with the response that “ok but did when”
This is inevitable when you boil everything down to a number. When that number refers to a (potentially) costly bug, people shirk critical thinking and just go straight for zero-tolerance.
Not ideal but I'm not sure if there's a better way :/
Some of it is surprisingly well known by name too!
A person might be implementing dozens or hundreds of pieces of software from multiple vendors. Now there are CVEs on their radar that they have to deal with and assess.
What do they do?
Do a deep dive on every CVE -- looking at the code, validating what the CVE represents, and assessing the security risk org-wide, no matter how, where, and in what way the software is used? Is the code even available?
Or is the prudent thing to say "CVE -- we need the vendor to resolve it"?
How much work must an end user put in, when a CVE is there?
I agree 100% that this is terrible, but my point is to at least understand it from the side of implementation. What I tend to do is use my distro for everything I possibly can. This provides an entity that is handling CVEs, and even categorizing them:
https://security-tracker.debian.org/tracker/source-package/o...
This helps reduce the need to handle CVEs directly. It doesn't eliminate that need, of course, but it vastly reduces it. Clicking through to a CVE yields helpful output with a rating:
https://security-tracker.debian.org/tracker/CVE-2021-36368
This rating may be low because the issue does not affect Debian in its default config, or because something isn't compiled in, or because the impact is truly low, and so on.
This gives me something to read if I must, and to grasp when I have no time to deep dive. I trust debian to be reasonably fast and work well to resolve CVEs of importance, and properly triage the rest.
Yes, I know there are edge cases, and I know that seldom-used packages often need an end user to report a CVE. It can and does happen. But the goal here is "doing our very best" and "proving we're doing that".
So this helps by allowing me to better focus on CVEs of vendor products I use, and get a better grasp on how to pursue vendors.
Yet when dealing with the infrastructure of smaller companies -- they just don't have the time. They still have to manage the same issues as a larger company, such as SOC 2 compliance, as well as liability issues in their market sphere.
And the thing is, I'm willing to bet larger companies are far worse at this CVE chicanery. It's just rote to them. Smaller companies have flexibility.
Here's a short list for making at least some of this manageable, because if you give people information, you don't have to respond as much:
* have an RSS feed, or a webpage that is only updated when there is a security update for your software
* have a stable and a development (bleeding-edge) branch. The stable branch gets only security updates and never new code. Maybe, possibly, bugfixes -- but bugfixes must not break the API or config files, or create requirements for newer versions of libraries
* provide a mailing list, never ever used for marketing purposes, which alerts users to new updates for the software. Never spam that email address. Ever.
Important:
If you have outstanding CVEs, list them somewhere on a static page, with a description of what the issue is and how you've triaged it. If you believe it's a bogus CVE, say so. If you think it only causes issues in certain circumstances, and is thus less important than other CVEs you are working on, say so.
Keep all CVEs here by simply updating the page to indicate a CVE was resolved, but also with a version/commit and date of when. Again, information resolves so many issues.
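For concreteness, such a status page can be as simple as the sketch below (every project name, CVE ID, version, commit, and date is a made-up placeholder):

```text
Security status for ExampleProject (last updated 2025-11-01)

CVE-XXXX-11111  heap overflow when parsing the --frobnicate option
  Fixed in v2.4.1 (commit abc1234, 2025-10-20). Please upgrade.

CVE-XXXX-22222  crash on a malformed configuration file
  Triaged: won't fix for now. Triggering it requires write access to the
  config file, which already implies root; we consider the practical
  impact negligible. A hardening fix is queued for the next stable release.
```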
Do these things, and your end users will love you, and it will engender more trust that security issues are being dealt with seriously. (Note: I'm not saying they aren't, but if you make it easy for people to know when updates come out, lots of questions stop being asked.)
When engineers see this sort of thing, they love you. They become stronger advocates. It falls under marketing as much as technical due diligence.
I am very sympathetic to the idea that all memory corruption bugs should be fixed systematically, whether or not they're exploitable. It works well for OpenBSD. And, well, I wouldn't have leaned into Rust so early if I wasn't a bit fanatic about fixing memory corruption bugs.
But at the same time, a lot of maintainers are stretched really thin. And many pieces of software choose to trust some inputs, especially inputs that require root access to edit. If you want to take user input and use it to generate config files in /etc, you should plan to do extremely robust sanitization. Or to make donations to thinly-stretched volunteer maintainers, perhaps.
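To make "extremely robust sanitization" concrete, here is a minimal sketch of an allowlist check before templating user input into a config value. The function names and the exact allowlist are my own illustration, not anything dnsmasq ships; the key point is rejecting newlines, since a "\n" in a templated value would start a new config directive.

```python
import re

# Allowlist for a hostname-like config value: leading alphanumeric,
# then letters, digits, dots, and hyphens, capped at 253 chars total.
_SAFE_VALUE = re.compile(r"[A-Za-z0-9][A-Za-z0-9.-]{0,252}")

def safe_config_value(value: str) -> bool:
    """True only for values safe to interpolate into a config line.
    fullmatch (not match) is used so an embedded or trailing newline
    can never sneak past the check."""
    return _SAFE_VALUE.fullmatch(value) is not None

def render_domain_line(domain: str) -> str:
    """Render a 'domain=' line, refusing anything outside the allowlist."""
    if not safe_config_value(domain):
        raise ValueError(f"unsafe config value: {domain!r}")
    return f"domain={domain}\n"
```

Note the use of `re.fullmatch` rather than `re.match` with `$`: in Python, `$` also matches just before a trailing newline, which is exactly the character this check exists to keep out.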
CVEs also cannot be denied by projects, and are often used as an avenue of harassment towards open source projects.
I agree with the poster on that mailing list: this is not, nor should it be, a CVE. At no point can you edit those files without being root.
Even if you need to be root to edit the files, it is still a deviation from the design or reasonably expected behaviour of that interface, so it is still a bug and should still get a CVE. It should either be fixed or, failing that, documented as 'won't fix' and put on the radar of anyone building an application on top. Someone building the next Plesk or cPanel or similar management system should at least know to filter their input and not let it reach the dangerous config file.
Re: harassment -- can't the project release a statement saying that the bug writeup is low quality and cannot be reproduced? Anyone ignoring that without question and using the CVE as evidence that the project is bad, without proof, is putting way too much value in CVEs, and the fault is their own.
Imagine a router has a web/CLI interface for setting the DHCP server’s domain name. At some point the user’s data is forwarded to a process editing the root-owned config file.
Hypothetically, if a vulnerability in the parsing of such a value from the config could be exploited by the end user, that would certainly matter.
And these things always seem to be one step away from bugs that allow arbitrary injection into the config file…
(I’m amazed at the hot messes exposed with HTTP and SMTP regarding difference in CR/CRLF/LF handling. Proxy servers and even “git” keep screwing this up…)
If the person templating isn't validating data, then it's already RCE to let someone template into this config file without careful validation.
... Also, this is a segfault, the chance anyone can get an RCE out of '*r = 0' for r being slightly out of bounds is close to nil, you'd need an actively malicious compiler.
While CVEs in theory are "just a number to coordinate on, with no real meaning", in practice a "Severity: High" CVE will trigger a bunch of work for people, so it's obviously not ideal to issue garbage ones.
If the argument is "CVSS is a complete joke", I think basically every serious practitioner in the field agrees with that.
While the relevant configuration does require root to edit, that doesn’t mean that functionality letting an unprivileged user edit or insert values into dnsmasq’s configuration doesn’t exist in some other application or system.
There are frivolous CVEs issued without any evidence of exploitability all the time. This particular example, however, isn’t that. These pretty clearly qualify as CVEs.
The implied risk is a different story, but if you’re familiar with the industry you’ll quickly learn that there are people with far more imagination and capacity to exploit conditions you believe aren’t practically exploitable, particularly in highly available tools such as dnsmasq. You don’t make assumptions about that. You publish the CVE.
The developer typically defines its threat model. My threat model would not include another application inserting garbage values into my application's config, which is expected to be configured by a root (trusted) user.
The Windows threat model does not include malicious hardware with DMA tampering with kernel memory _except_ maybe under very specific configurations.
How many wireless routers generate a config from user data plus a template. One’s lucky if they even do server side validation that ensures CRLFs not present in IP addresses and hostnames.
And if Unicode is involved … a suitcase of four leaf clovers won’t save you.
The people running the software define the threat model.
And CNAs issue CVEs because the developer isn’t the only one running their software, and it’s socially dangerous to allow that level of control over the narrative as it relates to security.
Is this the case? As we're seeing here, getting a CVE assigned does not require input or agreement from the developer. This isn't a bug bounty where the developer sets a scope and evaluates reports. It's a common database across all technology for assigning unique IDs to security risks.
The developer puts their software into the world, but how the software is used in the world defines what risks exist.
Some devices do this more securely than others. If you're able to inject newlines, it's highly likely that you can already achieve command execution by injecting directives. I wrote a bit about this technique here: https://blog.nns.ee/2025/07/24/dnsmasq-injection-trick/ (sorry for the self-plug). I think it's up to the device vendor to do this securely and not a concern for dnsmasq.
However, in this case, I feel like the concern is elsewhere and not the sole responsibility of the device vendors. Even if the vendor does templating securely, the vulnerable config options could still trigger the bug in dnsmasq itself and give some advantage to the attacker. Assuming the vulnerabilities themselves are legit, I'm finding it difficult to classify these issues as "bogus".
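To make the newline-injection concern concrete, here is what a naively templated config could end up containing. The directive names are real dnsmasq options; the attack string and script path are hypothetical:

```text
# Template "domain=%s", rendered with the benign input "lan":
domain=lan

# Same template rendered with the input "lan\ndhcp-script=/tmp/evil.sh":
domain=lan
dhcp-script=/tmp/evil.sh    # injected line is now a directive dnsmasq acts on
```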
The affected version, 2.73rc6, is quite interesting: it is from 2015, and it is not even the version the relevant code was introduced in; that is older still (2.62, at a guess). Why fuzz some random release candidate from ten years ago?
Even more interesting, v2.77 from 2017 (commits 5614413 and 2282787, to be precise) changed the code and added an (++i == maxlen) check at the very place that CVE-2025-12198 highlights as lacking an (i < maxlen) check. The commit message says it fixed a crash and thanks a friend for fuzzing the config file.
Now, I am not well versed in heap smashing in C, so don't mistake my lack of skill for an expert opinion, but I have a hard time understanding how that check is circumvented in recent versions of the code. Any explanation would be welcome.
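For intuition, here is a schematic re-rendering of the post-increment guard style described above. This is my illustration of the pattern in Python, not the actual dnsmasq C code; the point is that with the check placed after each store, every store still lands at an index below maxlen.

```python
def copy_with_post_check(src: bytes, maxlen: int) -> bytes:
    """Copy bytes into a fixed-size buffer of maxlen (assumed >= 1),
    mirroring the (++i == maxlen) guard style: increment the index
    after each store, then stop once it reaches maxlen. Every store
    therefore happens at an index in 0 .. maxlen-1, so the buffer is
    never written out of bounds -- the input is merely truncated."""
    buf = bytearray(maxlen)
    i = 0
    for b in src:
        buf[i] = b          # store at index i, which is < maxlen here
        i += 1              # ++i
        if i == maxlen:     # post-increment bound check
            break
    return bytes(buf[:i])
```

If the PoC really does smash the heap in current code, the interesting question is which path around a guard like this it takes.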
But more than that, someone should verify whether this PoC works in recent versions. As a prerequisite, the PoC would need to be shared publicly.
Sure, CVE isn't optimal, but virtually no model is. The whole point is basically to provide a simplification of reality so we can reason about it.
I know these questions are technically answered out there on the internet. But I looked into it a couple of years ago after finding a horrible bug in a popular npm package and the answers weren't clear to me.
Can a CVE be issued in retrospect?
For most (but certainly not all) projects, you fill out a simple form [0]. I've done it before and it's fairly easy.
> and what software is covered by them?
All software is covered by someone, usually by the vendor themselves or MITRE.
> Can a CVE be issued in retrospect?
Absolutely, but it's fairly uncommon.
The first issue being raised is that replacing the configuration file shouldn't count as a vulnerability. Usually I'd agree, but the fact that it causes memory corruption from user input warrants at least a low severity report.
If we can't prove that a vulnerability is exploitable, we have to keep our assumptions minimal. Even if the memory corruption is provably unexploitable today, a future code change could surface it as a plausible exploit primitive. It can also point to a section of code that may have been under-specified, and serve as a signal to pay more attention to those sections for related bugs. Also, it doesn't seem right to assume that the config files will always be under a privileged directory.
The second issue being discussed in the mailing list is that it's LLM slop. While the reports do seem to be AI-generated, I haven't seen any response about the PoC failing; but maybe there is a significant problem where a lot of PoCs are fake.
So many assumptions. As commander Data may have said today, "the most elementary and valuable statement in security, the beginning of wisdom, is 'I do not know.'"
However, it doesn't necessarily matter whether it's submitted by an incompetent human, a malicious human, or is AI slop. The end effect of wasting time on a non-vulnerability is the same.
In a world where generating AI slop is cheap, the standard should probably be that the person submitting a vulnerability needs to prove that it is a vulnerability, and probably that they're a person. Having the person receiving it prove that it isn't won't scale.