Memory usage went down drastically and response times got much better.
Additionally, I personally much prefer the syntax of the nginx configuration file (though this really is a matter of taste) and it generally feels like much more of the really cool stuff is built right in (I'm looking at you, memcached module).
As such, I love nginx and I would recommend it to anybody asking me what lightweight web server to use.
Note though that the performance improvements I listed above are not as such specific to nginx: You could probably achieve the same result with any other web server that isn't also your application server. What is specific to nginx is its stability, its nice (for my taste) configuration language and the availability of really interesting modules right in the core.
Congratulations to everybody for reaching 1.0. I'm looking forward to many happy years with your awesome tool in my utility belt.
tl;dr: I hate web servers, but I like nginx.
I've seen crashes in lighty, crashes in fastcgi and subtle differences in behavior between fastcgi and mod_php.
FastCGI just wasn't a commonly used method of deployment back then, so there were for sure some bugs around that I didn't have time or interest to fix.
By now there's PHP-FPM and FastCGI is much more common, so you could probably hook PHP directly into nginx, but I didn't want to run experiments, and I knew Apache worked, so that's what I used.
Just remember to turn off keep-alive in apache, btw.
https://nealpoole.com/blog/2011/04/setting-up-php-fastcgi-an...
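For the curious, a minimal sketch of what that keep-alive tip looks like in the Apache config (the file name and location vary by distribution):

    # httpd.conf (or an included conf file)
    # Don't hold backend connections open; nginx is already handling
    # keep-alive on the client side, so Apache workers should be freed
    # as soon as a proxied request completes.
    KeepAlive Off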
Thanks for the keep-alive tip. I missed that one.
EDIT: I see that the author of the post I linked replied below, also with the link.
On the nginx side, something like my config:

    location ~ \.php {
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_pass localhost:55155;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
Then you just need to run PHP in its built-in FastCGI daemon mode: php-cgi -b localhost:55155 matches that setup. More details on http://wiki.nginx.org/PHPFcgiExample
I found this much simpler than running nginx and apache...
tl;dr: the configuration you're suggesting will leave a site open to arbitrary code execution if the site allows user uploads.
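For reference, a commonly suggested mitigation looks roughly like this (a sketch, not the exact fix from the linked post; it reuses the port from the config above): anchor the location so it only matches URIs that actually end in .php, and have nginx verify the script exists before handing it to PHP. Setting cgi.fix_pathinfo=0 in php.ini is the other half of the usual advice.

    location ~ \.php$ {
        # Refuse requests for scripts that don't exist on disk, so an
        # uploaded file named evil.jpg can't be executed via a
        # /uploads/evil.jpg/x.php-style path-info trick.
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass localhost:55155;
    }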
One known failure. One. Because I was an idiot and tried to do something a little too clever in our configuration.
It's one of the best pieces of system software I've ever used. I can't thank Igor and the nginx community enough.
One of my favourite pieces of software, and the things you can do with Lua as a module make it an extremely flexible intelligent proxying service.
[1]: http://brainspl.at/articles/2006/08/23/nginx-my-new-favorite...
(His latest project cloudfoundry.com just launched today, so I think it's fairly relevant to give the guy some props right now :-)
Nginx's proxy module basically only supports HTTP/1.0.
So no proxying to a websocket server yet.
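For reference, the Upgrade pass-through that WebSocket proxying needs did eventually land in nginx 1.3.13; it looks roughly like this (the "backend" upstream name is a placeholder), though none of it existed at the time of this thread:

    location /ws/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;                      # default is 1.0
        proxy_set_header Upgrade $http_upgrade;      # forward the handshake
        proxy_set_header Connection "upgrade";
    }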
http://www.squid-cache.org/mail-archive/squid-dev/200907/003...
If you want to skim, the really knowledgeable parties in the discussion are Henrik Nordstrom, Robert Collins, Alex Rousskov, Mark Nottingham, and Ian Hickson.
It's actually not as bad as I'd feared, but the protocol also obviously had many issues unresolved back then (and probably still, since security concerns led to it being disabled in Firefox 4 and Opera), as did the plausibility of implementing a proxy or a proxy cache that could support it.
But, to get back to nginx, it is also possible to selectively support 1.1 features without supporting the entire protocol. Squid has supported persistent connections for over a decade, but took years to get support for caching with ETag, ranges, and a number of MUST features, so it reported as HTTP/1.0 with additional capabilities (it might still do this, I haven't paid much attention since leaving the project). So, it seems plausible that someone could implement just the necessary features for WebSockets, without having to implement everything in HTTP/1.1.
Averaged more than 2 releases a month last year with bugfixes and features.
An awesome web server.
Don't get me wrong, I have several production sites on nginx, but the development methodology makes me very nervous.
Cherokee has some of the same objectives as nginx (fast and lightweight) and has a very open development process.
Plus, the Cherokee Market is very cool (http://cherokee-market.com/) !
Apache, nginx and Cherokee also work on Haiku, alongside a bunch of native servers (e.g. PoorMan http://en.wikipedia.org/wiki/PoorMan and RobinHood). nginx fits Haiku well: it's lightweight and fast, and Haiku stays responsive under heavy load.
Additionally, marketing likes to see the major number change (the "X" in the X.Y.Z). I think the public does too. They feel like they're getting more bang for their buck.
Beyond that, I've found both the built-in modules and the add-ons to be of excellent quality. I've instantly improved performance by an order of magnitude for some applications just by dropping in nginx with a little bit of configuration. I've also built a global CDN that served hundreds of millions of video streams a month on a few commodity machines with nginx, all without a hitch.
Truly excellent software.
Lighty is about as fast and memory-friendly as nginx, but I found nginx to be more stable when dealing with FastCGI.
    if ($args ~ post=140) {
        rewrite ^ http://example.com/ permanent;
    }
Why is "^" used as a regex wildcard instead of ".*"? Thanks!

It might also be more performant, because the regex engine doesn't have to consume the whole string before finding a match.
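To make the distinction concrete: both of these rewrites match every request URI, but "^" is a zero-width anchor that succeeds at the very first position, so the engine never scans the path at all:

    rewrite ^  http://example.com/ permanent;   # matches instantly, consumes nothing
    rewrite .* http://example.com/ permanent;   # same effect, but scans the whole URI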
When would you still pick Apache? I'd love to see answers here:
http://serverfault.com/questions/258980/what-does-nginx-lack...
Does this mean that if a crash occurs, the whole nginx process crashes, whereas Apache's individual processes can be restarted?
/Genuinely curious
If one worker dies, any of the connections it's handling will close, but connections associated with the other worker processes stay alive. I'm not 100% sure, but I believe the master process will restart a worker if it dies.
All that said, I've never seen Nginx crash (except when playing around with an experimental third-party module - but never in production). For any production server, you should be using some form of monitoring daemon that will alert in case of a failure and automatically restart the web server process.
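As a sketch of that last suggestion, a monit stanza along these lines is one common setup (the pidfile and init-script paths are distribution-dependent assumptions):

    check process nginx with pidfile /var/run/nginx.pid
        start program = "/etc/init.d/nginx start"
        stop program  = "/etc/init.d/nginx stop"
        # Restart if the server stops answering plain HTTP on port 80.
        if failed host 127.0.0.1 port 80 protocol http then restart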
"Oh look, we're getting to the high 0.9's, better call the next one 1.0.0!"
Under this scheme, version numbers and the way they change convey meaning about the underlying code and what has been modified from one version to the next.
It's mentioned elsewhere in this thread, but it's not something I knew about beforehand so figured you may also find it helpful.