CPPSP (C++ Server Pages) which is putting up ridiculous numbers... here is the Single Query test:
https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...
It's quite different from the more typical implementations, where they all sort of look the same...
(Go) https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...
(NodeJS) https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...
(Gemini) https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...
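For readers who haven't opened the links: the "typical" Single Query implementations all share the same shape, which a minimal Go sketch can illustrate. The database call is stubbed out here (a real entry runs `SELECT id, randomNumber FROM World WHERE id = ?` through a driver); names like `queryWorld` are my own, not from any benchmark entry.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"math/rand"
	"net/http"
	"net/http/httptest"
)

// World mirrors the benchmark's table row: an id and a random number.
type World struct {
	ID           int `json:"id"`
	RandomNumber int `json:"randomNumber"`
}

// queryWorld stands in for the database lookup; a real implementation
// would query the World table via database/sql.
func queryWorld(id int) World {
	return World{ID: id, RandomNumber: rand.Intn(10000) + 1}
}

// dbHandler is the typical shape of a Single Query test: pick a random
// row id, fetch the row, serialize it as JSON.
func dbHandler(w http.ResponseWriter, r *http.Request) {
	world := queryWorld(rand.Intn(10000) + 1)
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(world)
}

func main() {
	// A real entry would run http.ListenAndServe(":8080", nil);
	// an in-process test server lets this example terminate.
	srv := httptest.NewServer(http.HandlerFunc(dbHandler))
	defer srv.Close()
	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```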
Also interesting to compare it to C# / HttpListener... which would benefit from moving all the framework code out into a separate library;
(C#/HTTP.sys) https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...
If you're going for an EC2/DigitalOcean setup with a lot of small instances, then you want to go with something like vert.x or node or whatever - while if you are deploying directly onto bare-metal, high-core/RAM servers, you'd be better off with something that is better at handling high thread counts - something like Golang.
The point I was making is that your actual hardware and workload can turn this benchmark on its head. You may naively think you are upgrading performance by switching to a different framework/language, yet if you don't understand why each platform is getting the numbers it does you might end up rewriting your app and actually decreasing performance because of your server hardware.
It would be fun to see this project (https://github.com/TechEmpower/FrameworkBenchmarks) become more and more popular, with formidable developers squeezing out performance from their framework of choice.
Compare that to a language like Python or Ruby, where you need something "extra" to make it easier to build a web application. You could certainly get by with just the standard library in other languages too, but very few would choose that option, because it would involve writing a lot of additional code.
I think it's fair to include Go, because it's a language/programming environment that comes with its own built-in web framework. A framework that's actually advanced enough that many don't need to look elsewhere.
I know there should be some overhead when using a framework, but sometimes the cost is too high and it's useful to know it (compare PHP and Symfony2, for example).
You can look at Revel, a web framework written in Go, which did quite well in the benchmark.
If anything, to me this data confirms that Rails is an amazing tool: not only do you get to develop quickly, but you also get pretty good average latency (or at least the potential for it, depending on which 3rd-party libraries you add to your app). What Rails isn't good at is throughput, which is almost never a problem for an early-stage company.
Working at a startup, it would be a huge success if I ever had to handle a lot of connections to my app; but today and every day, I want fast response times on a page load.
Realistically, I doubt many humans can distinguish between 1ms and 100-200ms response times.
However, the purpose of this project is not actually to measure how quickly platforms and frameworks can execute fundamental/trivial operations. Rather, these tasks are a proxy for real-world applications. Across the board, we can reasonably assume that real applications will perform 10x, 50x, 100x, or even slower than these tests. The question is, where does that put your application? If your application runs 100x slower than your platform/framework, does that put your application's response time at 200ms or 2,000ms?
That's a difference users do notice.
I wonder why; the Fortune 500 sites we have built are handling the load quite well.
That, and JSON serialization on .Net using default MS serializer is super slow. Everyone uses JSON.NET or another faster serializer in the real world.
For smaller projects, or for companies / people with tight budgets, these performance tests matter more, though the biggest wins still lie in caching and load balancing, not in platform efficiency. This can depend on the nature of the application, though. Some have tons of cacheable content, some have tons of dynamic content.
I've built plenty of .NET-based services, and generally it was very powerful hardware serving a relatively small user base, where expectations were much less demanding. And that's perfectly fine if the other benefits of the system (tooling, integration, etc.) work for the implementation.
For someone building a startup on a shoe-string budget, though, it has to be foreboding seeing such poor metrics when that directly translates into considerable additional hosting expenses.
Unfortunately, I still have not had enough time to improve Jester (or this benchmark) so its performance is still at the stage that it was on in the previous rounds. Hopefully this will change soon. Of course help is always welcome, so if you want to see Nimrod higher in the results then please help us improve the benchmarks!
Maybe something that ties together things like ebean ORM, Jetty, Jersey, Jackson, Guice. Dropwizard is the right idea, but is geared towards building REST backends.
Any suggestions on a pure Java framework that has critical mass and would fit the bill?
* Java as a first-class citizen
* Strong core of basic web app functionality
* REST and Search engine friendly URLs
* Action oriented – basic framework for routes, MVC etc
* Stateless
* Good documentation, active community
If we look at action frameworks only:
* Play 2: Great except it's Scala. Ruled out.
* Spring MVC: Spring is bloated old-school Java with Hibernate. Out.
* Stripes: hasn’t had a commit in over a year… which is unfortunate because it looks interesting. Out.
* Spark: appears to be a one-person project. Out.
* Google Sitebricks – ditto
* Ninja: Ditto
The Play guys went to great trouble to ensure that both Java and Scala are fully supported. Perhaps consider being a bit more open-minded about your options. Scala is simply a more modern and flexible language, so I don't blame them for using it.
But in general I agree that at the moment there aren't a lot of web frameworks in the Java world that fit your description.
If you don't even consider Erlang you won't miss it. But if you know it has some strengths for this kind of job and you don't mind the syntax, you'd like to see it compared to other solutions.
Rest assured, "get erlang running again" tops my 'todo' list for round 9.
This blog post describes this in a lot more depth:
http://jlouisramblings.blogspot.com/2009/01/common-erlang-mi...
It is silly that such a rich and awesome set of benchmarks never pushes on concurrency, one of the major points of failure "in the wild" -- more common as you become the go-between for your users and some set of APIs -- users stack up on one side, waiting connections stack up on the other.
Until we have such a test type, there is no value in exercising higher concurrency levels. Outside of a few frameworks that have systemic difficulty utilizing all available CPU cores, all tests are fully CPU saturated by the existing tests.
With that condition, additional concurrency would only stress-test servers' inbound request queue capacity and cause some with shorter queues to generate 500 responses. Even at our 256 concurrency (maximum for all but the plaintext test), many servers' request queues are tapped out and they cope with this by responding with 500s.
The existing tests are all about processing requests as quickly as possible and moving onto the next request. When we have a future test type that by design allows requests to idle for a period of time, higher concurrency levels will be necessary to fully saturate the CPU.
Presently, the Plaintext test spans to higher concurrency levels because the workload is utterly trivial and some frameworks are not CPU constrained at 256 concurrency on our i7 hardware. As for the EC2 instances, their much smaller CPU capacity means the higher-concurrency tests are fairly moot. If you switch to the data-table for Plaintext, you can see that the higher concurrency levels are roughly equivalent to 256 concurrency on EC2.
For example, jetty-servlet on EC2 m1.large:
256 concurrency: 51,418
1,024 concurrency: 44,615
4,096 concurrency: 49,903
16,384 concurrency: 50,117
The EC2 m1.large virtual CPU cores are saturated at all tested concurrency levels.
jetty-servlet on i7:
256 concurrency: 320,543
1,024 concurrency: 396,285
4,096 concurrency: 432,456
16,384 concurrency: 448,947
The i7 CPU cores are not saturated at 256 concurrency, and reach saturation at 16,384 concurrency.
We are not against high-concurrency tests; we are just not interested in high-concurrency tests where they would add no value. We're trying to find the maximum capacity of frameworks, not how frameworks behave after they reach maximum capacity. We know that they tend to send 500s after they reach maximum capacity. That's not very interesting.
All that said, once we have an environment set up that can do continuous running of the tests, I'll be more amenable to a wider variety of test variables (such as higher concurrency for already CPU-saturated test types) because the amount of time to execute a full run will no longer matter as much.
[1] https://github.com/TechEmpower/FrameworkBenchmarks/issues/13...
Yeah I can see that being more useful.
If the server is not flooded and there are only 20 concurrent requests, then even a plain Python script serving a file over a TCP socket will do the job. The tests should all be long-running, at 10k concurrency at the very least.
Longer or even persistent (websocket) connections should be looked at. Hit them all with 20k connections, some very long-lived. They don't have to arrive at the exact same microsecond, but they should come in pretty close, and they shouldn't just do a plaintext file read and close; they should be longer-lived. How about something as long as the "validating your credit card" spinner some shopping websites make you wait on after you click the "process payment" button, when you don't know whether to refresh the page or whether refreshing will double-charge you? That kind of stuff. Or say a story written by pg about startups fighting the NSA using Go hits HN and a flood of requests brings the server to its knees.
Why bother having nice benchmarks? What are they showing? CPU loading, so users can save money on compute time at Amazon; that's OK, I guess. But it could be made more interesting.
I'd still like to see a good showing from Django, maybe using uWSGI + Nginx. I might submit a pull request and see if I can't get that included in the next round. Gunicorn is great and incredibly easy to set up, but pales in comparison to other platforms when it comes to raw speed.
[0]https://github.com/TechEmpower/FrameworkBenchmarks/tree/mast...
edit: I meant in the chart, at a glance
Also note that the particular Rack test that performs very well is running a very small amount of Ruby code. Thanks to these improvements, however, rails-jruby now consistently tops rails-ruby, if only by a small amount.
See more on TorqueBox: http://torquebox.org/news/2013/12/04/torquebox-next-generati...
Around round 6 of these benchmarks, I ditched Scala altogether (and also my framework). The reason I ditched Scala was not its performance; it was that I was the only developer in my company who knew Scala, having learnt it from a couple of books (one was around 800 pages). Obviously, I needed a language that any other developer would have no problem taking over, and Scala developers are 1) expensive and 2) not easy to find. Also, Slick (Typesafe's database-access library for Scala) wasn't mature yet.
For this reason, around Round 6, I started writing my own framework in GoLang and used it internally as an 'auxiliary framework'. I will explain more about this framework soon. In my company, we have about a handful of backend programmers and a couple of frontend devs. I found GoLang much easier to teach my programmers than Scala. Please note: Scala is a brilliant functional programming language, but if you think switching to it from Ruby/Python/etc. would be easy, you are wrong.
Now, we have a workflow that allows us to deliver as quickly as possible without missing out on performance. We write our entire V1 in Rails and implement all the UI/frontend-related code, then port it to our GoLang framework. We have an internal generator: we feed it our Rails app, code for our framework is generated on the fly, and we deploy it. We lose a little productivity handling the type conversions, bugs, etc., but it's totally worth it. Go outperforms Rails by a huge margin. I noticed that using something like Puma helps a lot, but it's still nowhere near our GoLang framework.
As for our framework, it's pretty simple: just organize the files as you would in a Rails application (Models/Views/Controllers/Config) and everything works without many performance hiccups. We use Gorilla components for things like routing and cookies. The rest is slightly adapted from other frameworks (like Martini).
All in all, I love having JVM-like performance with the productivity of Ruby in a language like GoLang. And this Round 8 benchmark is nothing short of impressive. If you haven't tried GoLang yet, try writing your own framework; not only do you learn about all the trade-offs behind the 'magic' Rails does under the hood, you also pick up new things and become a better programmer.
I think GoLang is pretty impressive if someone as average as me can write a Rails-like framework, except with better performance. Give it a try, people, you won't be disappointed.
Why? Wouldn't the time be better spent learning a language on the JVM, which has a whole array of stable, well-tested, production-ready frameworks, i.e. all of them?
Switching from the JVM to Go is like taking 1 step forward and 100 steps back.
I can understand learning and using go for some things, but companies are moving major infrastructure to it with staff that are still learning it.
I completed the "Introduction to Programming in Go" in under 3 hours, and in less than 6 hours I was able to code a full-fledged application. I cannot say the same for Java or Scala.
I would like to have an enterprise-level language inside my company without the complexities associated with one. I think GoLang solves my problem, and hence I use it.
I love the JVM; it's fast, sturdy, reliable. But throw more Java developers at it, no matter how good, and you end up with half-baked code, unused classes and unwanted complexity. I wish I could throw more Scala developers at it, but that's not possible at the moment within my financial constraints.
Almost 100% of the developers we hire know C/C++ well, so it's much easier to teach them GoLang than, say, Java. And that is a lot of time and money saved for me.
>If you haven't tried GoLang yet, you should try writing your own framework.
I only say this because I want people to understand how incredibly simple GoLang is.
Hope this helps.
But of course, you have to first write about 300 lines of XML to wire up the various BeanInversionContainerFactoryDependencyInjectors.
Java is a needlessly verbose, death-by-pattern-programming monstrosity. Go is a fresh take on programming in general. The standard library is phenomenal, built-in concurrency is excellent, and it's an extremely productive environment to be in. It feels like driving a Mazda Miata vs a Ford F-250.
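The "built-in concurrency" claim is easy to demonstrate: goroutines and channels are language primitives, so fanning out work needs no executor or thread-pool boilerplate. A small sketch (the `fetch` task is a made-up stand-in for any I/O-bound call):

```go
package main

import (
	"fmt"
	"sync"
)

// fetch simulates an I/O-bound task; results flow back over a channel.
func fetch(id int, out chan<- string) {
	out <- fmt.Sprintf("result %d", id)
}

func main() {
	out := make(chan string, 5)
	var wg sync.WaitGroup
	for i := 1; i <= 5; i++ {
		wg.Add(1)
		go func(id int) { // "go" launches a goroutine: a keyword, not a library
			defer wg.Done()
			fetch(id, out)
		}(i)
	}
	wg.Wait()
	close(out)
	for r := range out {
		fmt.Println(r) // five results, in whatever order they finished
	}
}
```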
Different strokes for different folks, I guess.
While Clojure surely has a bigger learning curve than Go, it's much simpler and more approachable than Scala. I've learned it recently and am an absolute convert. It seems perfect for your use case and you could even skip writing the prototype in Rails because you'll be just as productive in Clojure.
Note that I'm not trying to convince you to change; you obviously found something that works for you. But I am curious if there were obstacles to using Clojure (missing libraries? poor tutorials?) and if so, how that could be fixed.
In the plaintext test Go only comes in 13th: http://www.techempower.com/benchmarks/#section=data-r8&hw=i7...
It's really frustrating that a search engine company would use such an unsearchable name for a new product.
That said, I rewrote my app in Go and I'm very happy with the performance, stability and testability. The recently announced go 'cover' tool is very useful and a breeze to use.
[1] Here are my benchmarks: https://docs.google.com/spreadsheet/ccc?key=0AhlslT1P32MzdGR... (includes codepad.org links to the source for each benchmark)
I'm OK with out-the-door response times of 8-15ms while serving 20,000 unique hits a day on a $5/month VPS. The server doesn't even break a sweat, and it's doing more than serving the app too.
That's still not terrible though and it could easily improve by massive amounts with a stronger server. I have not gone crazy with profiling either. Just using fairly basic cache blocks when applicable.
http://www.techempower.com/blog/2013/03/28/frameworks-round-...
Edit: every category but the plaintext benchmark.
Curious - any reason why you guys don't have ASP.NET tests in Windows with SQL Server? I fiddled with the filters and found none.
Update: Never mind. I see it now. You don't have Windows tests on EC2.
AFAICT some of the larger frameworks by default do a bunch of stuff (CSRF and IP-spoofing checks, session management, ETag generation based on content, etc.) that simpler solutions don't, but these things can usually be turned off.
Barebones frameworks in the same language are generally going to outperform heavier frameworks. Feature counts/matrices are not taken into consideration in these benchmarks.
Of course barebones platforms will be faster, but doing unnecessary work is a different thing.