Am I incorrect in assuming that you could implement your entire server-side js app now as an nginScript module? Do people think that is a good thing?
Not to mention that putting more interpreters and more end-user code into a system that has access to your service's private key might not be terribly wise.
I'm sure many people will tell me I'm wrong, and I guess I can see some benefit to simplifying configuration and perhaps deployment.
But there's a reason we've mostly moved away from deploying embedded PHP applications inside of mod_php.
The point is to make nginx's configuration dynamic and to prevent bloating applications with stuff that belongs at the (let's call it) devops level.
Now, I also don't think nginScript is such a good idea, but that's because they seem to be building their own JavaScript VM for it. I believe this is a waste of effort and more of a JavaScript-all-the-things move than anything else.
Lua is a very simple language; the VM is small and fast, and for the "dynamic configuration" scenario one hardly writes more than a few lines (I've done quite a few things, and the total line count is in the low hundreds).
One simple example that I would love to use this for: generating and adding a UUIDv4 to every request's headers. Doing so would allow us to append the UUID to virtually every log in our entire stack. Right now there is no easy out-of-the-box solution for this in nginx. With scripting capabilities it becomes trivial.
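For reference, this is already doable with the third-party lua-nginx-module (OpenResty). A minimal sketch, assuming an OpenResty build; the `X-Request-Id` header name and upstream name are my choices, not a standard:

```nginx
location / {
    access_by_lua_block {
        -- Generate a random UUIDv4 (math.random is not
        -- cryptographically strong, but fine for log correlation).
        local template = "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx"
        local uuid = string.gsub(template, "[xy]", function(c)
            local v = (c == "x") and math.random(0, 15)
                                  or math.random(8, 11)
            return string.format("%x", v)
        end)
        -- Attach it to the request before it is proxied upstream.
        ngx.req.set_header("X-Request-Id", uuid)
    }
    proxy_pass http://backend;
}
```

The header is then visible to every upstream service, and can be written to nginx's own access log via the `$http_x_request_id` variable in a `log_format` directive.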
However, whether or not Lua was enough and adding JavaScript is overkill, I'm not sure.
They're welcome to spend time building their own JS VM; it's their project. And while I don't think this is intended to appeal to the existing userbase, I think it will attract an entirely new userbase, which will further nginx as a webserver and aid the goals of those who use it (faster. faster. more speed. faster. fix the bugs.) as it gets more use, more paid use, and thereby more developer time.
I guess nginScript is mainly an outreach thing. Apparently the nginx developers have decided that all those UI developers are just more comfortable writing JS than anything else. What does raise some red flags is that it is a subset of the language running on a custom VM. So it is in fact a JS dialect which will still require some amount of learning to use effectively (no free JavaScript lunch here).
No, let's not call it that, because I have no idea what you mean when you say that.
Disclaimer: we built one of the biggest Lua+Nginx projects [1]
So yes, while this is just a continuation of a bad idea, it's a rather substantial continuation. I fear that a lot more people will make use of this than make use of nginx's Lua support.
This is kind-of already possible with OpenResty: https://openresty.org/
You can also use Lua, a programming language more than capable of producing efficient web application backends, directly in Nginx config files: https://www.nginx.com/resources/wiki/modules/lua/
So...in a way, this has really been possible for a long time. I'm not disagreeing with you, because personally I'd much rather build my web app in a language and toolchain that's easier to work with, but I find it interesting to read about.
It's possible that Duktape was still too early in its development to use when Nginx started this project, but even then it seems like they could have collaborated and saved a lot of man hours.
I wonder if there's a good reason why Nginx didn't use Duktape, or could this be a case of NIH where some Nginx dev got excited about the opportunity to build a new superfast JavaScript implementation just for Nginx? Surely it would have been less work to integrate the two event loops and use Duktape's (well-documented) API to build whatever features they wanted?
That said, although I've built significant async programs in other languages, I've never done anything in C on this scale, so take my words with a grain of salt.
So many better choices out there.
For now. I get the feeling that soon enough a lot of them will start dying out. Trying to "sell" some language other than JS in a mostly-JS shop is already an impossible battle. Popularity is used as a counter-argument for everything.
An interesting pattern I see lately is industry pundits on the CIO side advocating JavaScript for in-house tooling. I guess they don't have much experience with programming languages.
With JS making inroads into CIO territory, we'll see a much higher usage in the future. And a much larger fallout with tons of unmaintainable legacy code.
It's like the times when everything needed Java; Oracle even added Java to their DBs for stored procedures. I guess Oracle will add JS too (or have they already?).
So what you're saying doesn't make any sense for anything remotely performance-sensitive.
There are plenty of good minimal languages, from embedded Scheme varieties to LuaJIT to Io. If that's what you mean by a DSL (an embedded minimalist Scheme with some nginx-specific functions and variables, for instance) then I agree, but they chose the JavaScript path for a good reason: it has a healthy community and ecosystem, and while the performance won't be top notch, I don't think it'll matter.
The big HUGE win for Lua here, in my view, is the plethora of packages on LuaRocks. With simple scripts, as presented in this blog post, sure, a JavaScript subset is fine. But suppose you want to interact with Redis from your script? Where's the C FFI interface, and a prepared package you can use from nginScript? Grab the hiredis bindings from LuaRocks, and you're set.
Second: great, yet another JavaScript implementation. They're very open about supporting only a subset out of the gate; who knows how long it would take to reach parity with even ES5 or ES6?
But network-related libraries shouldn't be pulled from LuaRocks (unless maybe they have resty in the name). One would want to use lua-resty-redis, not hiredis, within OpenResty. The 'resty' libs use the nginx cosocket API, so they work asynchronously with the nginx core; hiredis would block the workers.
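To make the distinction concrete, here is a minimal sketch of the non-blocking style, assuming OpenResty with lua-resty-redis installed (the location name and the key are made up for illustration):

```nginx
location /greet {
    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        -- 1s timeout; cosocket calls yield to the event loop
        -- instead of blocking the worker process.
        red:set_timeout(1000)

        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.log(ngx.ERR, "redis connect failed: ", err)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end

        local greeting, err = red:get("greeting")
        if not greeting or greeting == ngx.null then
            greeting = "hello"
        end

        -- Return the connection to the keepalive pool for reuse.
        red:set_keepalive(10000, 100)
        ngx.say(greeting)
    }
}
```

A synchronous client like hiredis would hold the worker hostage for the duration of every Redis round-trip; the cosocket version lets that worker serve other requests while waiting.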
My eyes start to bleed when I imagine what some cowboys will implement on top of that.
I'm curious to see the impact of this strategy on performance.
https://insights.ubuntu.com/2015/05/18/lxd-crushes-kvm-in-de...
This is something I would've expected from a "Show HN: Embedded JS in Nginx" post, meaning it has potential to be a project done just to see what can be done, that everyone could say "Hey, that's cool" about and then never use, because it's a terrible idea.
Instead, it's presented as a reasonable way of moving forward, when they could've pushed for their own much more reasonable alternative. They've done more work for a worse idea, effectively out-competing their own feature with a much more popular but crappy alternative.
I'd rather they added support for the safe subset of some statically-typed, high-performance language compiled via LLVM. That language must be without asynchronous GC, so its memory use will be predictable under high load.
JavaScript on the server side is so vogueish. The vogue will change soon, but the ugliness of the architectural decision will stay with Nginx forever.
The combination of FUD and marketing $$$ is being used to "encourage" more people to migrate to (or use) nginx as an "open source" alternative, when it's obvious that "open source" is being used mostly for the PR aspect and not so much for the community-focused and community-led aspects that are really core to "true" Open Source.
(Which obviously is sad, as in my humble opinion Lua is, for various reasons, the better embedded language. I hoped it could become a widely adopted standard; we had great success with Lua in Redis in the past.)
EDIT:
I've read through your comments and you seem like a pretty experienced developer. I don't think any of the information in my comment should be news to you so I'm curious to hear more about why you feel this way.
I like nginx, but it gets way too much of a sacred-cow treatment from the dev community. It has plenty of problems: the configuration is a pseudo-language that doesn't always make the right choices and is difficult to heavily customize, and I've seen it be -very- unstable under certain circumstances, including really bread-and-butter things like SSL caching. If there's a bug, you'll have a good old time debugging its massive collection of C code. It's great, but it's not perfect.
Making nginx do custom things that you'll probably need in a serious environment (example: dynamically programmable SSL SNI) requires crazy mods and hacks that have only recently been made available (by third parties) and that heavily reduce nginx's performance. Further, they only provide purgeable proxy caching in their commercial version, which costs an exorbitant amount of money. The free purger, naturally, makes nginx lock up. I wouldn't mind chipping in a bit for nginx because I want to support their team anyway, but at their current prices ($100/node/month or something like that) we simply can't afford it.
I realize this is not a popular opinion right now, but node.js is completely up to the task of running a reverse HTTP proxy. It is basically competitive with nginx for performance (you likely won't notice the difference unless you're running the New York Times), and as a tradeoff for an unnoticeable slowdown you get a full, Turing-complete programming language to completely control the flow of your data. Nginx under the hood is just a reactor pattern with children that share a socket. Node.js has a cluster module that uses the exact same strategy. Mind you, this is from someone who has done talks critical of reactor-pattern scaling.
Also, if you have blocking-I/O apps, it doesn't matter what you configure nginx to do; it's still going to lock up when someone DDoSes it with slowloris connections. Make your Ruby app thread-safe and use Rainbows! instead of Unicorn, or you're going to have a bad time.
The good embeddable web servers are usually pretty lightweight, scalable and can be programmatically configured. Things like Jetty are popular, but look at languages like Go that have HTTP serving built in via libraries and scale nicely via coroutines.
Vert.x etc. are cool for performance reasons, being lightweight and usually much less thread-hungry (using async operations, sometimes in many fewer threads).
That said, I do agree that reverse proxies are still really useful for all the reasons you mentioned. Reverse proxies on top of some of these high performing embedded HTTP serving engines is a good practice, when you need it.
And there is no need to throw out the tried and true engines, like Apache, Nginx, etc.
Just depends on the use case and needs I suppose.
Granted, it's more complexity, but nginx certainly isn't the must-have that it used to be.
I should say that I haven't ever really tried this at production scale. My background is mostly consulting wherein my responsibility is to deliver a provably working solution for someone else to manage and operate. So there's my bias in this.
Other commenters have made a lot of the points I would. You can easily handle TLS in Java or JavaScript. Or you can terminate with an ELB as I usually do. A lot of load can be pushed to a CDN.
But really, I'm not convinced it would be that much slower. I know this is dated, but a simple ApacheBench test shows Tomcat outperforming httpd for static assets [1]. I've never had a site that was remotely bottlenecked by static assets, but I've had many bugs due to obtuse mod_rewrite configs. It's cheaper to have fewer bugs than to spin up another server.
[1] http://www.devshed.com/c/a/BrainDump/Tomcat-Benchmark-Proced...
You'd rather use a dedicated load balancer like Route 53 or HAProxy. I don't think choosing Apache or Nginx is really the right option for that. Plus, something like Vert.x is already very usable as a load balancer.
>SSL Termination
Just about everything handles this already. Current best practice is to use SSL for all communication between your own servers anyway, so there's no gain. If you SSL terminate on your load balancer, these days you want to use a new SSL connection between your load balancer and application server anyway if possible.
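For the curious, the re-encryption practice mentioned above looks roughly like this when the terminating proxy is nginx. A sketch only; the certificate paths and the upstream name are placeholders:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/edge.crt;
    ssl_certificate_key /etc/nginx/certs/edge.key;

    location / {
        # Terminate client TLS here, then open a fresh TLS
        # connection to the application server instead of
        # forwarding plaintext across the internal network.
        proxy_pass https://app_backend;
        proxy_ssl_verify              on;
        proxy_ssl_trusted_certificate /etc/nginx/certs/internal-ca.crt;
    }
}
```

Of course, if the app server speaks TLS natively anyway, this layer adds a hop without removing any of the TLS work from the backend, which is the point being made here.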
> serving static content, caching, compression
New app servers like Vert.x support Linux sendfile and handle serving static content very well. Currently, nearly everyone uses Cloudflare to handle all of this anyway. No real reason to duplicate it if Cloudflare is set up to handle it.
> centralized logging
Centralized logging is usually done by sending all of your logs off from each service/server to be aggregated on a dedicated box running Logstash or whatever. You don't use your reverse proxy for this?
> using different applications on the same URL space
From using the web, I don't think this is done anymore. In fact, it seems to be the opposite: foo1.app.com, foo2.app.com seems to be the trend, basically the opposite of multiple apps on one domain, because of the big move towards microservices. Extra domains are the cheapest thing there is.
> added benefit of another layer of security
Security doesn't work that way in my experience. It's more about minimizing attack surface. If you use node and nginx and apache then any exploit that hits any of those 3 will hit you. If you only use node, then you can only get hit by exploits on node. So I'd argue it's the opposite. The more layers, the less secure.
> nginx/apache are really good at what they do
Sure, but you need to find the most efficient tool to handle your needs with the least amount of complexity. Only add something if it solves an issue that you can't solve in a simpler way just as well.
I can see where he's coming from. But I still slightly disagree with the feeling.
The basic idea is a DSL that "happens" to have the same syntax as JavaScript. I guess this VM has a lot of optimizations tied to its purpose within nginx (e.g., no GC overhead, since each JS context is supposed to be short-lived and tied to a single request).
Bearing in mind that mod_perl's original use was not for writing applications; it was for messing with Apache, for doing the things you can't easily do in the config. But where there's a way, abuse will follow, and before you know it whole apps will be written with this.
Ah well, those who don't learn from history are doomed to repeat it.
So while it may seem like a good idea now, you can bet that in 5-10 years it will no longer look so clever.
Nginx already has Lua scripting support.
Disclaimer: I work at NGINX.
Personal feelings about JavaScript aside (not my favorite either!), I think this is a great business move (adoption, excitement, blogosphere marketing, even if some users shoot themselves in the foot), and I think it opens up exciting possibilities, including creating Nginx+ features for free.
The antipathy is a symptom of our JavaScript disease. We have grown tired of this affliction. We understand now what makes it less great than we once thought. The churn of rapidly growing and devolving JS frameworks, the slog of awful design-by-committee processes putting the language together... it is nothing compared to Lua's simplicity.
And some clever fellows understood this long ago, on a site much like this one. Feel free to take a look. https://news.ycombinator.com/item?id=7890685