So this is a pretty impactful fork. It's not like one of 8 core devs or something. This is 50% of the team.
Edit: Just noticed Sergey Kandaurov isn't listed on GitHub "contributors" because he doesn't have a GitHub account (my bad). So it's more like 33% of the team. Previous releases have been tagged by Maxim, but the latest (today's 1.25.4) was tagged by Sergey.
That said, I’m not sure how much of a leg he has to stand on for using the word nginx itself in the new product’s name and domain…
Pretty sure they can't really do anything to him in Russia. Russia and the US don't recognize each other's trademarks or patents, same as with China.
https://freenginx.org/hg/nginx
I don't see it. Sure, he contributes. But based on that log, over the last 3-4 years he definitely does not look like he is the one driving nginx. Or am I looking in the wrong place?
(Regardless, if you scroll back past March 2020, the timeline "resets" to this past year, and you see a ton of Dounin commits. Looks like an artifact of how the hg web viewer deals with large, long-lived branches getting merged.)
If the basketball or soccer team captain were also a ball hog, they'd have trouble keeping the bench full.
When you become lead, you have to let some of the code go, and the best way I know to do it is to only put your fingers into the things that require your contextual knowledge not to fuck up. If you own more than 10% of the code at this point, you need to start gift-wrapping parts of the code to give away to other people. If you own more than 20%, then you're the one fucking up.
Obviously this breaks down on a team size of 2, but then so do concerns about group and team dynamics.
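Not the parent, but if you want a rough read on this for your own repo, per-author commit share from `git shortlog` is a decent first approximation (commits, not lines owned — purely illustrative; `share_report` is just a name I made up):

```shell
# Percentage of commits per author. Feed it real data with:
#   git shortlog -sn --no-merges | share_report
share_report() {
  awk '{tot += $1; n[NR] = $1; line[NR] = $0}
       END {for (i = 1; i <= NR; i++) printf "%.1f%% %s\n", 100 * n[i] / tot, line[i]}'
}

# demo on shortlog-style sample data (count<TAB>author)
printf '30\talice\n10\tbob\n' | share_report
```

Commit counts overstate ownership compared to `git blame`, but they're cheap to get and good enough to spot the 20% case.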
- CVEs are gold to researchers and organizations like citations are to academics. In this case, the CVEs were filed based on "policy" but it's unclear if they are just adding noise to the DB.
- The bug is not as severe as the greater powers-that-be would like to think (again, they see it as doing due diligence; developers who know the ins and outs might see it as an overreaction).
- Bug is in an experimental feature.
I'm not saying one way is right or wrong in this case, just pointing out that my experience has generally been that CVEs are kind of broken in general...
>And, while the particular action isn't exactly very bad, the approach in general is quite problematic.
... to this specific bug in an experimental feature.
Originally I read your comment as Maxim doesn't want to use CVEs at all.
Yeah, very, very likely one and the same. Since 1989.
Where was the disagreement hashed out, so I can read more?
If nginx continues to receive more attention from security researchers, I imagine Maxim will have good reasons to backport fixes the other way too, or at least benefit from the same disclosures even if he does prefer to write his own patches as things do diverge.
Though history also shows that hostile forks rarely survive 6 months. They either get merged if they had enough marginal value, or abandoned outright if they didn't. Time will tell.
- nginx is "open core", with some useful features in the proprietary version.
- angie (a fork by several core devs) has a CLA, which sounds like a bait and switch waiting to happen, and distros won't package it
- freenginx is at least open source. But who knows if it'll still be around by June.
I had been an Apache user for quite some time and thought I'd take a look at the (at that point, a few years old) "new" shiny thing. I found that something as simple as LDAP authentication required a paid plugin; a free Apache module has been available for this for ages. That made nginx a non-starter for this particular use case.
I wonder if the fork will accumulate free plugins for things that the old core required paid plugins for, slowly eroding their business case.
Unpaid Open Source developers tend to focus on interesting/cool core stuff and ignore all the stuff businesses care about (like LDAP authentication).
>In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers’ position.
F5 is a CNA and follows CVE program rules and guidelines, and we will err on the side of security and caution. We felt there was a risk to customers/users and it warranted a CVE, he did not.
On the other hand nginx core developers (the Russians) were arrogant to the point of considering anyone else as inferior and unworthy of their attention or respect, unless they contributed to nginx oss. They managed that project secretively and rewrote most “outside” contributions. They also ignored security issues - one internal developer spotted security issues with NGINX Unit (a failed oss project 20 years out of date before it started) and was told to fix the issues quietly and not to mention “security” anywhere in the issue messages or commit history.
So I can imagine exactly how these meetings would have gone, I’m sure it was the last straw!
For clarity are you referring to CVE-2024-24989 and -24990 (HTTP/3)?
> webserver, llc
From statement: "Instead, I’m starting an alternative project, which is going to be run by developers, and not corporate entities"
Ah, I completely forgot F5 was involved in this, as probably did most everyone else, and F5 gets no money from this. It shouldn't matter to them; do they even have competition in the enterprise load balancer space? I spent 9 years of my career managing these devices; they're rock solid, and I remember anecdotes about MS buying them by the truckload. They should be able to cover someone working on nginx, and maybe advertise it more for some OSS goodwill.
Buuuut, they have by far the best support. They’re as responsive as Cisco, but every product isn’t a completely different thing, team, etc. And they work really well in a big company used to having Network Engineering as a silo. I’d only use them as physical hardware, though. As a virtual appliance, they’re too resource hungry.
Nginx or HA-Proxy are technically great for anything reasonable and when fronting a small set of applications. I prefer nginx because the config is easier to read for someone coming in behind me. But they take a modern IT structure to support because “Developers” don’t get them and “Network Engineers” don’t have a CLI.
For VMWare, NSX-V HA-Proxy and NSX-T nginx config are like someone read the HOWTO and never got into production ready deployments. They’re poorly tuned and failure recovery is sloooow. AVI looked so promising, but development slowed down and seemed to lose direction post acquisition. And that was before Broadcom. Sigh.
We'd get completely bogus explanations for bugs, escalate up the chain to VPs and leadership because there was an obvious problem with training, understanding, and support for complex issues, and get the VPs trying to gaslight us into believing their explanations were valid. We're talking things like: on our IPv4-only network, the reason we're having issues is due to bugs in the equipment receiving IPv6 packets.
So it's one of those things where I've personally been burned so hard by F5 that I'd probably look for other vendors to an unreasonable degree. The only thing is, this was a while ago, and the rumors I've heard are that no one involved is still employed by F5.
I can’t imagine them supporting telco gear. The IPv6 thing has me LOLing because I just had a similar experience with a vendor where we don’t route IPv6 in that segment and even if we did, it shouldn’t break. Similarly, a vendor in a space they don’t belong that I imagine we bought because of a golf game.
A thing I dread is a product we’ve adopted being acquired… and worse, being acquired by someone extending their brand into a new area. It’s also why we often choose a big brand over a superior product. It’s not the issue of today, but when they get bought and by who. I hate that so much and not my decision, but it’s a reality.
It’s also a terrible sign if you’re dealing with a real bug and you’re stuck with a sales engineer and can’t get a product engineer directly involved.
I have a list of “thou shalt not” companies as well, and some may be similar where a few bad experiences ruined the brand for me. Some we’re still stuck with and I maaaay be looking for ways to kill that.
Handling a few thousand RPS is nothing to nginx, and doesn't require fancy hardware.
That said, it replaced Kemp load balancers, which it seems is the next biggest competitor in the hardware load balancer appliance space.
I think this because Nginx has a bunch of parsing quirks that are shared with AVI and nothing else.
Caddy seems like a wonderful alternative that does load balancing and static file serving but has wild config file formats for people coming from Apache/Nginx-land.
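For anyone curious what that config looks like, here's a minimal Caddyfile sketch covering the two use cases mentioned (hostnames, paths, and ports are placeholders):

```
example.com {
    # serve static assets directly
    handle_path /static/* {
        root * /srv/www
        file_server
    }

    # round-robin across two backends for everything else
    reverse_proxy backend1:8080 backend2:8080 {
        lb_policy round_robin
    }
}
```

Coming from nginx, the biggest adjustment is that there's no `server`/`location` nesting; site blocks and matchers replace both.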
"Simplifying configuration: the location directive can define several matching expressions at once, which enables combining blocks with shared settings."
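For contrast, in stock nginx today, sharing settings across several match types means either duplicating the `location` block or folding the matches into one regex — a sketch of the workaround the quoted feature would remove:

```
# stock nginx: to share settings across several prefixes you either
# duplicate location blocks or collapse them into a single regex match
location ~ ^/(img|static|assets)/ {
    expires 30d;
    add_header Cache-Control "public";
}
```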
> Unfortunately, some new non-technical management at F5 recently decided that they know better how to run open source projects. In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers’ position.
Refers to F5's decision to publish two vulnerabilities as CVEs, when Maxim did not want them to be published.
IANAL, but I strongly recommend reconsidering the name, as the current one contains a trademark.
Ingres was forked; the post-fork version of Ingres was called "Post"gres.
So maybe name this new project "PostX" (for Post + nginx).
Though that might sound too similar to posix.
> The INGRES relational database management system (DBMS) was implemented during 1975-1977 at the University of California. Since 1978 various prototype extensions have been made to support distributed databases [STON83a], ordered relations [STON83b], abstract data types [STON83c], and QUEL as a data type [STON84a]. In addition, we proposed but never prototyped a new application program interface [STON84b]. The University of California version of INGRES has been ‘‘hacked up enough’’ to make the inclusion of substantial new function extremely difficult. Another problem with continuing to extend the existing system is that many of our proposed ideas would be difficult to integrate into that system because of earlier design decisions. Consequently, we are building a new database system, called POSTGRES (POSTinGRES).
All F5 contributions to NGINX open source projects have been moved to other global locations. No code, either commercial or open source, is located in Russia.
yeah, yeah
But then perhaps he also has every right to do it, even though AFAIR the original author was somebody else.
Everyone has a right to fork the project. Only time will tell if they get a critical mass of developers to keep it going.
Edit: I see now from the hg history that Igor hasn't been coding on Nginx for a decade actually.
In light of recently announced nginx memory-safety vulnerabilities I'd suggest migrating to Caddy https://caddyserver.com/
Using Caddy instead.
A point came where I realised I didn't enjoy Nginx. Configuring it was hard and it felt brittle.
A particular pain point is certificates/ssl. I absolutely dreaded doing anything with certificates in Nginx.
When I heard that Caddy automatically handles SSL/ certificates I jumped the nginx ship and swam as fast as I could to Caddy.
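For comparison, a complete Caddyfile for a site with automatic HTTPS (domain and port are placeholders) is just:

```
example.com {
    reverse_proxy localhost:3000
}
```

Caddy obtains and renews the certificate on its own; there's no `ssl_certificate`/`ssl_certificate_key` equivalent to manage.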
I didn't compile in fastcgi support in to my build, but it can be enabled.
For tini, you mean https://github.com/krallin/tini? How large is your final Docker image? Why not just Alpine in that case, which is musl+busybox?
https://mail.python.org/archives/list/mailman-users@python.o...
when your KPI is CVEs per month, every bug looks like a CVE
F5 wants this feature prioritized over what Maxim planned, and Maxim doesn't have to comply, he is a volunteer.
Is the fork going to allow you to change the nginx Server response header (A PAID feature in the current fork...) without requiring you to mod it in and recompile it? :p
Yes - you read that correctly. They refuse to accept PRs adding that functionality because it is restricted to the paid version :p
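For reference: in stock nginx OSS, `server_tokens off;` hides only the version, not the product name, and the usual workaround is compiling in the third-party headers-more module. A sketch (the module path varies by distro, and the header value is just an example):

```
# nginx.conf (main context): load the dynamic module if built as one
load_module modules/ngx_http_headers_more_filter_module.so;

http {
    # replace the Server header outright
    more_set_headers "Server: webserver";
}
```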
https://web.archive.org/web/20240214184151/https://mailman.n...
see EU CRA
There was a time when I wanted to move away from it and was eyeing HAProxy, but the lack of the ability to serve static files didn't convince me. Then there was Traefik, but I never looked too much into it, because Nginx is working just fine for me.
My biggest hope was Cloudflare's Rust-based Pingora pre-announcement, which was then never published as Open Source.
Now that I googled for the Pingora name I found Oxy, which might be Pingora? Googling for this yields
> Although Pingora, another proxy server developed by us in Rust, shares some similarities with Oxy, it was intentionally designed as a separate proxy server with a different objective.
Any non-Apache recommendations? It should be able to serve static files.
I could have sworn that I've read about Nginx CVEs in the past.
Before you ask why I would do that: I've got all Ethernet interfaces getting IPs assigned dynamically, on an on-demand basis, and only wanted ONE specific (non-public) interface to host the HTTP/HTTPS protocol.
And no, we do not want to jerry-rig some fancy nginx-config-file shell-script updater whenever an IP address gets assigned/reassigned.
Here, lighttpd and Apache came to the rescue.
Infrastructure like that should not be run by for-profit corporations anyway; it will always end up like this sooner or later.
Probably a skill issue, but when I last tried to compile Nginx from the GitHub mirror I spent hours trying to figure it out. I wish there were a GitHub page with an easy-to-understand build process... and that I could just run "cargo build --release" lol
make
make install
I just ran this to be sure I wasn't delusional and it took only 2 minutes.
https://github.com/nginx/nginx/tree/branches/stable-1.24
I cloned this and it doesn't have a makefile or configure script
Neither does the official repo?
https://hg.nginx.org/nginx/file/tip
Do you run it from /auto/?
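If it helps anyone: the source checkouts (the hg tree and the GitHub mirror) don't ship a root-level `configure` like the release tarballs do — the script lives in `auto/`. A build from a checkout looks roughly like (prefix is a placeholder):

```
# from the root of the cloned source tree
auto/configure --prefix="$HOME/nginx-test"   # release tarballs use ./configure instead
make
make install
```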
OTOH, things that update too often seem to be more than slightly broken on an ongoing basis, due to ill-advised design changes, new bugs and regressions, etc.
Apple routinely holds back changes for a .0 release for advertising reasons. This means that they routinely have big releases that break everything at once. Bugs could come from 4 or 5 different sets of changes. But if they spread out changes, bug sources would be way easier to identify.
And bug fix velocity going up could mean people stop treading water on bugs, and actually get to making changes to avoid entire classes of bugs!
Instead, people think the way to avoid bugs is to avoid updates, or do it all at once. This leads to iOS .0 releases being garbage, users of non-rolling release Linux distros to have bugs in their software that were fixed upstream years ago, and ultimately to make it harder to actually fix bugs.
If you want things not to break, you must slow down.
It isn’t reasonable to ask for these two things at once:
* lots of change
* stability
Eg: http3 support was stabilized with 1.25.1, which came out in June 2023.
That and a small collection of other things are standards-based and not going through changes.
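For anyone wanting to try it, a minimal sketch of the QUIC/HTTP/3 listener in an nginx 1.25.x config (cert paths are placeholders):

```
server {
    # QUIC/HTTP/3 alongside regular TLS on the same port
    listen 443 quic reuseport;
    listen 443 ssl;

    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;

    # advertise HTTP/3 to browsers
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```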
Too bad every team I've ever worked on as a consultant does the opposite. The biggest piles of shit I've ever seen created have all been the product of 10 people doing 2 people's worth of work...
That team changes every 6 months when another company offers more money. If only one or two people are working on a project, that's a high risk for the company.
If you've got one or two highly skilled people on that team of 10, you are lucky. Managers don't want them working alone on their project; they want them to help the team grow.
NGINX is the de facto standard today, but I can remember running servers off Apache when I began professionally programming. I remember writing basic cross-browser SPAs with script.aculo.us and Prototype.js in 2005, before bundlers and React and Node.
Everything gets gradually replaced, eventually.
This is one reason maintainability is very important for the survival of a project. If it takes an extra person to maintain your build system or manage dependencies or... or... it makes it all the more fragile.
1 or 2 people maintain one particular software implementation of some of these standards.
It's interesting to think about what a large and massive community cheap and reliable computation and networking have created.
For example, how much of that code is the mail server component and how much is the http component? How much does http/1, or http/2 or http/3 take up? How much of that is necessary to keep the internet actually running?
I'm not suggesting it's trivial but the original perspective was highly overblown. To think of it another way, if these two men died tomorrow, how much of an impact would it actually have? Some, to be sure, but the internet wouldn't even notice.
If 2 people are operating the plants, that’s terrifying.
(Obviously once bigco buys such a startup's offering, that startup needs to hire, fast)
But that is peanuts, and to me it's basically no different from B2C; it's not something you can put on a "customers that trusted us" banner on your landing page.
If you want a big company to rely on your services, with 50-100 paid seats at $500 a month from a single company, that is not just some manager swiping a CC; for that you have to have a team and business continuity.
Traefik open source is my go-to for almost all of my use cases these days, and I have never stopped and said, hmm, I wonder if Apache would do better here. It is that good.
In place of Nginx I have been able to make use of HAProxy and Apache just fine. Long ago Apache was slower than Nginx, but ever since APR 1.7 and Apache 2.4 they are about the same performance-wise. Some here don't like the configuration syntax, but I am used to it.
There is JetBrains, for example.
But there is also core-js which is a little polyfill library being used by like way more than half of high profile websites. Also written by a Russian national.
If you excise all contributions by Russian nationals to PostgreSQL or the Linux kernel, they will be left in a not very runnable state, I’m afraid.
On the other hand, it’s not like you are giving them money directly, unless you do; I also can see that in, say, both Linux and PostgreSQL there is also enough people from the “geopolitical opposition” so that even if the Russian contributors are asked by some stern people from the Apparat to sneak something backdoory in, it will be sniffed rather quickly and prevented from going much further.
So tl;dr is that there is no simple response.
There is also the mighty bystander effect at play: surely, someone else is going to look at it. Someone else will have time to test it. He's our hero, the Someone-Else-Man!
Mind you, it only takes getting caught once, and your mountain of reputation will poof out of existence in an eyeblink. This is the price.
Mind you, asking to downplay a vulnerability "because it's in an experimental module not built by default" would make me suspicious on the simple grounds that even if a module is experimental, you ship it alongside your stable code, and for sure someone builds it and is using it. Depending on who those users might be, there could be also parties interested in them not patching the vulnerability for as long as possible.
This sounds paranoid for sure, but your being paranoid doesn't mean there's nobody out to get you!
It’s sometimes beneficial to pose as an EU-based business if a purely Russian business was either sanctioned or considered too risky/dirty/shady to deal with.
So while Czechs don’t like to be equated with Russians, not all of them would quite sing „běž domů, Ivane” (“go home, Ivan”) or share the feeling.
Nginx loves to pretend it’s 1995. It barely has http3 support and does insanely stupid things by default.
No wonder people move to haproxy, Traefik, caddy, etc. Cloudflare doesn’t use it anymore for good reason.
Not sure if serious, but you do realise that free is not at all about having a GitHub page?
Maxim has been working on nginx for years and just forked the project so that he can continue working on it. The license remains the same as the original nginx project and you can already download its sources here: https://freenginx.org/en/download.html
"I don't always git clone, but when I do, it's hg clone"
I’ve been looking for the words to put to that feeling myself but was unable to pinpoint it so well.
I loved GitHub at first. “Look at all the cool stuff I made” was kinda a way of showing my capabilities (and is still a great way today!) but somewhere along the way it became a platform for egos and star stroking and blind following into the nights. They improved their search but it could be so much better. Not everyone has a graphic designer on staff to make pretty README.md’s
Yep - I remember what happened with SourceForge/VA Linux. I actually paid for GitHub when it first came out, just to fund it.
Still makes me nervous tbh
Some people just don't have a clue and only know buzzwords