They play to the strengths of the virtual DOM approach by driving the benchmark through the DOM instead of whatever interface was implemented within each TodoMVC implementation.
So I forked the benchmark and changed both the Backbone and Mercury implementations to work through their respective APIs.
Here it is: https://github.com/smelnikov/todomvc-perf-comparison
As you can see, the giant gap between Backbone and Mercury is now gone, while both tests perform the exact same tasks. (Feel free to step through it in the debugger to see for yourself.)
Here's my commit log: https://github.com/smelnikov/todomvc-perf-comparison/commit/...
Note: I've added a new method to the Backbone implementation for toggling completed state, as the old one was horribly inefficient. This is not something inherent to Backbone but rather specific to this TodoMVC implementation. See my comments in the commit log.
Note 2: Exoskeleton, which is basically Backbone without the jQuery dependency, is roughly 2-3x faster than vanilla Backbone. I'm going to predict that it will actually be significantly faster than Mercury.
Note 3: I think the virtual DOM is great and seemingly has many benefits, but I feel as though the speed benefit has been greatly exaggerated.
React was originally designed for developer efficiency, not performance. It is a port of XHP (in PHP), which we use at Facebook to build the entire front-end and are really happy with. It turns out that the virtual DOM and diff algorithms have good properties in terms of performance at scale. If you have ideas on how we can communicate it better, please let me know :)
This benchmark was taken from the WebKit source code, then forked into http://vuejs.org/perf/ , then forked to include Mercury, then forked again to include Elm.
Neither Elm nor Mercury came up with this benchmark; they just added themselves to it.
What this benchmark shows is that async rendering is really fast. Mercury, Vue, and Elm all use async rendering, where DOM effects are batched and only applied once.
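To illustrate the batching idea, here is a minimal sketch (not any of these frameworks' actual implementations): state changes are queued and a hypothetical render function touches the DOM once per flush rather than once per change. In a browser the flush would typically be scheduled via requestAnimationFrame.

```javascript
// Minimal async-rendering sketch: queue changes, render once per flush.
function createBatcher(render) {
  var pending = [];
  return {
    update: function (change) {
      pending.push(change); // queue the change; no DOM work yet
    },
    flush: function (state) {
      pending.forEach(function (change) { change(state); }); // fold all changes
      pending = [];
      render(state); // touch the DOM exactly once per flush
    }
  };
}

var renders = 0;
var batcher = createBatcher(function () { renders += 1; });

var state = { count: 0 };
for (var i = 0; i < 100; i++) {
  batcher.update(function (s) { s.count += 1; });
}
batcher.flush(state);
// 100 queued changes, but only a single render
```

A synchronous implementation would have rendered 100 times for the same sequence of updates, which is exactly the gap this benchmark measures.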
A better, closer-to-real-user-experience benchmark would be one using mousemove, since that's an event that can fire multiple times per frame.
The way the Backbone TodoView is designed does not take into account the possibility of a user adding 100 items through the DOM in a tight loop, probably because such a use case is impossible outside of this type of benchmark. As a result, the Backbone implementation ends up performing a lot of unnecessary renders. So as far as Backbone performance is concerned, this benchmark is not indicative of any real-world scenario.
Just to reiterate: when you're loading a set of todos from local storage to display when the user first opens the page, you would not populate the "new todo" input box and fake an enter event for each item you want to add. Instead, you would reset the Backbone.Collection with a list of your new todos (i.e., go through the interface). That's basically the change I made to the benchmark. Sorry if it wasn't clear.
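Here is a sketch of the difference, using a tiny hypothetical stand-in for a Backbone-style collection (real Backbone.Collection has the same add/reset event semantics, but this is not its implementation):

```javascript
// Toy event-emitting collection to contrast per-item add vs. bulk reset.
function Collection() {
  this.items = [];
  this.listeners = { add: [], reset: [] };
}
Collection.prototype.on = function (event, fn) { this.listeners[event].push(fn); };
Collection.prototype.trigger = function (event) {
  this.listeners[event].forEach(function (fn) { fn(); });
};
Collection.prototype.add = function (item) {
  this.items.push(item);
  this.trigger('add');   // one event (and typically one render) per item
};
Collection.prototype.reset = function (items) {
  this.items = items.slice();
  this.trigger('reset'); // a single event for the whole batch
};

var renders = 0;
var todos = new Collection();
todos.on('add', function () { renders += 1; });
todos.on('reset', function () { renders += 1; });

// Faking 100 "enter" events: 100 renders
for (var i = 0; i < 100; i++) todos.add({ title: 'todo ' + i });
var rendersFromAdds = renders;

// Going through the interface: one render for all 100 items
todos.reset(todos.items);
```

The benchmark's tight loop is the first pattern; a real app loading saved todos would use the second.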
See my proof of concept video: https://vimeo.com/100010922
And an actually runnable example that you can edit without refreshing: https://github.com/gaearon/react-hot-loader
I plan to write a blog post soon explaining how to integrate it into a React project.
The problem is that it's very easy to force a full recalculation of the whole page layout. Whenever you call .offsetHeight or .offsetWidth or .getComputedStyle, you're doing it. The full list of properties is about two dozen strong:
http://gent.ilcore.com/2011/03/how-not-to-trigger-layout-in-...
Most web developers don't know this, so they're actively making their pages slow. Worse, many popular frameworks build this into the library, so if you use them, there is no way to keep your pages responsive. jQuery, for example, can easily cause 4-5 layouts with a single call to .css; on a mobile phone and a moderately complex page, that's about a second of CPU time.
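The fix is to batch all layout reads before any style writes. The sketch below uses a fake element so the layout cost is countable outside a browser; in a real page, reading offsetHeight after a style write forces the engine to recalculate layout synchronously.

```javascript
// Fake element whose offsetHeight getter counts forced (fake) layout passes.
function FakeElement(doc) {
  this.doc = doc;
  Object.defineProperty(this, 'offsetHeight', {
    get: function () {
      if (this.doc.layoutDirty) {   // reading a layout property...
        this.doc.layoutCount += 1;  // ...forces a layout pass if styles changed
        this.doc.layoutDirty = false;
      }
      return 100;
    }
  });
}
FakeElement.prototype.setHeight = function (px) {
  this.doc.layoutDirty = true;      // any style write invalidates layout
};

var doc = { layoutDirty: true, layoutCount: 0 };
var els = [new FakeElement(doc), new FakeElement(doc), new FakeElement(doc)];

// Interleaved read/write ("layout thrashing"): one forced layout per element
els.forEach(function (el) { el.setHeight(el.offsetHeight + 10); });
var interleaved = doc.layoutCount;

// Batched: all reads first, then all writes -> a single layout
doc.layoutCount = 0; doc.layoutDirty = true;
var heights = els.map(function (el) { return el.offsetHeight; });
els.forEach(function (el, i) { el.setHeight(heights[i] + 10); });
var batched = doc.layoutCount;
```

Three elements cost three layouts interleaved but only one batched; on a real page with hundreds of elements the gap is what makes .css-style APIs so expensive.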
Also, the fact is that the web is stuck in a Web Components-driven approach to building apps which is pretty orthogonal to how this works.
I haven't dug deep enough to form an educated opinion on the language, but so far it's refreshing to see such a drastically different process from today's popular languages.
I've just had a look at the code, and React is using a localStorage backend, while Backbone is using its models + collections with a localStorage extension... so I'd expect there to be at least some overhead there, but apparently not.
Does anyone have any quick thoughts on what might be happening here? I can't shake the feeling that these benchmarks might not be terribly useful.
Not by default, because it's missing laziness and immutability: since just about everything in JavaScript is mutable, React can't prune the changeset, as developers could be modifying the application state behind its back in ways unknown (or worse, could be using state not under React's control at all: neither props nor state, but globals and such).
That is, for safety/correctness reasons the default `shouldComponentUpdate` is `function () { return true; }`. As a result, React has to re-render the whole virtual DOM tree (by default), and the only possible gain is when diffing the real and virtual trees.
Because ClojureScript and Elm are based on immutable structures, they can take the opposite default (and if you're going behind their back and mutating stuff, it's your problem).
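The two defaults can be sketched as plain functions (a simplification, not React's actual class machinery): the safe default always says "re-render", while with immutable data a reference comparison is enough.

```javascript
// React's safe default: it can't know what mutated, so assume everything did.
function defaultShouldUpdate(prevProps, nextProps) {
  return true;
}

// The immutable-data default: a new value is always a new object, so
// reference equality is a sound (and O(1)) change check. This is only
// safe if props are never mutated in place.
function immutableShouldUpdate(prevProps, nextProps) {
  return prevProps !== nextProps;
}

var props = { todos: ['a', 'b'] };
var unchanged = props;                       // nothing changed: same reference
var changed = { todos: ['a', 'b', 'c'] };    // a change produced a new object

var wastedRender = defaultShouldUpdate(props, unchanged);   // true: re-renders anyway
var skipped = immutableShouldUpdate(props, unchanged);      // false: render skipped
var detected = immutableShouldUpdate(props, changed);       // true: real change caught
```

This is why Om and Elm can prune whole subtrees that vanilla React has to re-render.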
Also, I'm not sure React defers rendering until animationFrame.
An optimized React application (or at least one which uses PureRenderMixin [0]) ought to have performance closer to the others' (Om is a bunch of abstractions over React, after all, so you should be able to get at least as good performance in pure React as you get in Om).
[0] http://facebook.github.io/react/docs/pure-render-mixin.html
* Development React is slower than production React. There are a bunch of extra checks all over the place, along with a profiler [1].
[1] http://facebook.github.io/react/docs/perf.html
* Speed isn't the top priority of the framework; predictability is. There's a virtual event infrastructure and other browser normalization work going on. Om uses React under the hood and more or less represents the best-case scenario.
* React isn't magically fast. The diff process still has to visit all the nodes in the virtual DOM, generate the edit list, and apply it, which is a substantial amount of overhead. I'm used to seeing React even with or behind the competition when benchmarked with a small number of nodes. The explanation I've seen is that these benchmarks aren't considered important, since the goal isn't to be as fast as possible but rather to never be slower than 16ms.
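To make the "visit all the nodes, generate the edit list" overhead concrete, here is a toy diff over flat lists of text nodes. Real virtual DOM diffs handle nested elements, attributes, and keys, but the O(n) visit of every position is the same:

```javascript
// Toy virtual DOM diff: compare old and new node lists, emit an edit list.
function diff(oldNodes, newNodes) {
  var edits = [];
  var len = Math.max(oldNodes.length, newNodes.length);
  for (var i = 0; i < len; i++) {  // every position is visited, changed or not
    if (i >= oldNodes.length) {
      edits.push({ type: 'insert', index: i, node: newNodes[i] });
    } else if (i >= newNodes.length) {
      edits.push({ type: 'remove', index: i });
    } else if (oldNodes[i] !== newNodes[i]) {
      edits.push({ type: 'replace', index: i, node: newNodes[i] });
    }
  }
  return edits;
}

var edits = diff(['a', 'b', 'c'], ['a', 'x', 'c', 'd']);
// -> replace at index 1, insert at index 3
```

Even when nothing changed, the diff still walks both trees; that fixed cost is why React can trail hand-tuned DOM code on tiny documents.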
The trick behind most of the "React is fast" articles is that React is O(N_dom) instead of O(N_model), so if you can shrink the size of the output DOM, React goes faster. The Om line demonstrates this, and doing a screen-sized render over a sliding window into a huge data set (most grid/scrolling-list demos) is another common example. There are perf knobs that probably aren't being turned here, but if the app renders fast enough, why would you waste your time turning them?
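A sketch of that O(N_dom) point (hypothetical names, not any framework's API): the model has 100,000 rows, but only a screen-sized window is handed to the renderer, so the diff cost tracks the window size, not the data size.

```javascript
// Windowed rendering: build virtual nodes only for the visible slice.
function renderWindow(model, start, rowsPerScreen) {
  return model.slice(start, start + rowsPerScreen).map(function (row) {
    return { tag: 'li', text: row.title }; // virtual nodes for visible rows only
  });
}

var model = [];
for (var i = 0; i < 100000; i++) model.push({ title: 'row ' + i });

var visible = renderWindow(model, 500, 20);
// 20 virtual nodes to diff and render, regardless of the 100,000-row model
```

Scrolling just changes `start`; each frame React would diff 20 nodes, never 100,000.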
I consistently got the result of Angular utterly and completely destroying React. Initially I blamed the virtual DOM approach, but after seeing other frameworks utilize it and outperform Angular by a huge margin, it seems to me that React is not written to perform well on small DOM documents. (There might be a turning point, considering how bloated the Facebook pages it was designed for are.)
My limited understanding of React is that it fails at (a), (b), and (c), and only limited measures can be applied to improve them. Re-creating the entire DOM on each update probably does not help. I have no information on whether (d) is possible with it.
I have been using Angular.dart for a while now, and it can be used to get all of them in an optimal way.
Disclaimer: I'm working at Google.
[1] http://swannodette.github.io/2013/12/17/the-future-of-javasc...
There is certainly something wrong with the benchmark. Since Om is a layer on top of React, React itself obviously cannot be inherently slower than Om. (Perhaps idiomatic React usage is slower than idiomatic Om usage for this case, though?)
Edit: rather, see masklinn's comment, which describes what actually happens. The point being: vanilla React does extra work to account for anything a developer might do, but allows Elm and Om, which impose more rigorous standards on their users, to override that behavior.
The same in PureScript would be:

    profile user = mkUI spec do
      div [ className "profile" ]
        [ img [ src user.picture ]
        , span [ text user.name ]
        ]
See, types everywhere. https://github.com/purescript-contrib/purescript-react/blob/...
Disclaimer: I'm the author of Vue.js. The benchmark is a fork of a fork of the Vue.js perf benchmark (http://vuejs.org/perf/).
React looks interesting, but only if it gives significant advantages (time-to-market, maintainability, etc) vis-a-vis AngularJS in managing a large code-base.
Any first hand reviews?
(I'm not affiliated, just a very happy user.)
I hear you about the API surface. Currently AngularJS has too many weird/new concepts.
node : String -> [Attribute] -> [CssProperty] -> [Html] -> Html
We could define a convenient function div, for example, with div : [Attribute] -> [CssProperty] -> [Html] -> Html
div = node "div"
that would let us say "div [] [] [text "Hello world"]" instead of "node "div" [] [] [text "Hello world"]". Of course, this doesn't fix your problem with the empty brackets. This can be fixed with something like: bareDiv : [Html] -> Html
bareDiv = div [] []
letting us do "bareDiv [text "Hello world"]".

Update: the benchmark uses a correct implementation available here: https://github.com/evancz/todomvc-perf-comparison/tree/maste... so it was a false alarm on my part. Tried it out at https://rawgit.com/evancz/todomvc-perf-comparison/master/tod...
I hope the new framework gets animation right; I'd love to see it as flexible as D3. In my opinion, this is something React currently lacks (the transitions are there, but they are too simplistic to cover complex web-app cases).
I'm sure others are experimenting as well, so we should see more implementations soon.
Or is this the limit of an overly mutable language like JS?
The Om example does some kind of event delegation using channels, which is much faster.
I am a bit concerned about the lack of typeclasses and what that could mean if I try to build something bigger with it. Maybe I could use PureScript and Elm together.