Speaking of optimizing method calls: now that it's been a few years, I wonder what Ruby folks think about refinements. Are you using them? Are they helpful? Horrible?
I remember reading from the JRuby folks that refinements would make Ruby method calls slower---and not just refined calls, but all method calls [1], although it sounds like that changed some before they were released [2]. It seems like people stopped talking about this after they came out, so I'm wondering if refinements are still a challenge when optimizing Ruby? I guess MRI JIT will face the same challenges as the Java implementation?
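For anyone who hasn't touched them, a refinement scopes a monkey-patch to code that explicitly opts in with `using`, which is a minimal sketch like this (names are made up for illustration):

```ruby
# A refinement is a monkey-patch that only applies where `using` is in
# effect, instead of globally. That scoping is exactly what made the
# method-lookup (and call-site caching) story harder for implementers.
module Shout
  refine String do
    def shout
      upcase + "!"
    end
  end
end

using Shout  # activates the refinement for the rest of this file

puts "hello".shout  # => "HELLO!"
```

Outside a file (or block) that says `using Shout`, `"hello".shout` raises `NoMethodError`, so every call site potentially depends on which refinements are active.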
It might seem strange that they lead with this new feature, but `yield_self` can greatly improve Ruby method chains, and the `then` alias makes it more accessible.
The style of writing a Ruby method as a series of chained calls has a non-trivial effect on readability and conciseness. `then` lets you stick an arbitrary function anywhere in the chain, making chains more flexible and composable, with less need to interrupt them with intermediate variables that break the flow.
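A small example of what that looks like (assuming Ruby 2.6+ for the `then` alias; `yield_self` works from 2.5):

```ruby
require 'json'

# `then` passes the receiver to the block and returns the block's
# result, so a step like JSON.parse (which isn't a String method)
# can sit inside the chain instead of forcing a temporary variable.
result = '{"names": ["ada", "grace"]}'
  .then { |raw| JSON.parse(raw) }
  .fetch("names")
  .map(&:capitalize)
  .join(", ")

puts result  # => "Ada, Grace"
```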
I've been using `ergo` from Ruby Facets for years ... largely the same thing ... and the more I used it, the less readable my old code seems now. Funny how adding one very simple method can have more effect than so many other complex, high-effort changes.
An order of magnitude as in .. 10x? This seems too good to be true. Half the arguments against Rails melt away like butter if that's truly the case.
Anyone with a better understanding of the details care to comment on the likelihood of these performance gains being actually realised, and if not, what we might realistically expect?
Between Ruby 1.8 and 2.5, performance has improved around 13x in tight loops[2]. The Rails performance issue has been massively overblown since 1.9 was released.
Ruby 1.8 was a tree walking interpreter, so the move to a bytecode VM in 1.9 was a huge leap in performance. Twitter bailed to the JVM before moving to 1.9. A lot of those 10-100x performance differences to the JVM are gone thanks to the bytecode VM and generational GC.
Bytecode VMs all share the same fundamental problem of instruction-dispatch overhead: they're basically executing a different C function per instruction, depending on the input.
Doing _anything_ to reduce this improves performance dramatically, even just spitting the instructions' source code out into a giant C function, compiling it, and calling that in place of the original method. Another 10x improvement on tight loops should not be a problem.
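To make the dispatch overhead concrete, here's a toy stack machine (in Ruby rather than C, but the shape is the same): every instruction pays for the `case` branch on top of its actual work, and that per-instruction tax is what pasting the handlers together into one straight-line function removes.

```ruby
# Toy bytecode interpreter: the `case` is the dispatch overhead that
# a template JIT eliminates by compiling the sequence straight through.
def run(program)
  stack = []
  program.each do |op, arg|
    case op
    when :push then stack.push(arg)
    when :add  then stack.push(stack.pop + stack.pop)
    when :mul  then stack.push(stack.pop * stack.pop)
    end
  end
  stack.pop
end

# (2 + 3) * 4
puts run([[:push, 2], [:push, 3], [:add], [:push, 4], [:mul]])  # => 20
```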
[1] https://www.techempower.com/benchmarks/#section=data-r15&hw=...
[2] https://github.com/mame/optcarrot/blob/master/doc/benchmark....
I also didn't know that Twitter jumped off Rails before Ruby got performant, which weakens the argument that Twitter outgrew Rails.
Still, thanks for this insightful comment.
Yeah no kidding.. https://samsaffron.com/archive/2018/06/01/an-analysis-of-mem...
With 2.6 and sorbet [1] coming down the line, it's exciting to be a Rubyist again!
It does if you ignore the overhead of JIT compilation itself. However, my understanding is that writing a JIT implementation that performs better than a good interpreter is surprisingly difficult. You have to have a lot of complicated logic for tracking hotspots and using JIT judiciously in short-running scripts.
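A toy sketch of that bookkeeping (the names and the threshold are made up here, not how MRI's MJIT actually decides): count invocations per method and only hand a method to the compiler once it crosses a threshold, so short-running scripts never pay the compilation cost.

```ruby
JIT_THRESHOLD = 5
CALL_COUNTS   = Hash.new(0)
COMPILED      = {}

def invoke(name, body)
  CALL_COUNTS[name] += 1
  # Stand-in for handing the method off to the JIT once it is "hot".
  COMPILED[name] = body if CALL_COUNTS[name] >= JIT_THRESHOLD
  body.call
end

3.times  { invoke(:cold, -> { 1 + 1 }) }  # never reaches the threshold
10.times { invoke(:hot,  -> { 2 * 2 }) }  # compiled on the 5th call

puts COMPILED.keys.inspect  # => [:hot]
```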
https://www.techempower.com/benchmarks/#section=test&runid=a...
Another big win is the bootsnap gem, which is a cache of previous VM runs that loads faster than parsing all invariant pieces of code again.
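For reference, wiring it up is two lines; this is bootsnap's documented Rails setup (in a plain Ruby app you'd require it manually before the rest of the code loads):

```ruby
# Gemfile
gem 'bootsnap', require: false

# config/boot.rb, right after bundler is set up:
require 'bootsnap/setup'  # caches compiled ISeq and load-path scans
```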
Golang plus Gin, sure. However, there are other Go frameworks on the charts that blast the Ruby competition out of the water. Ruby isn't really on the podium at all, with C, C++, Rust, Go, C#, and Java about an order of magnitude out in the lead on fortunes.
Martini isn't much of a framework itself either, so let's forget the full-featured nonsense. Almost none of the ecosystem is in play with these benchmarks. You could build a system up around fasthttp just as well as net/http, and ASP.NET certainly can't be accused of being a for-purpose contender.
The most impressive thing, IMHO, is how well Ruby is doing on maximum latency. I can't quite reconcile that, considering fasthttp is pretty much zero-allocation and Go's stop-the-world pauses are in the microseconds. Pretty impressive.
Ruby (MRI) will have to reinvent the wheel to get the panoply of optimizations that some very smart people have already baked into existing compiler backends, like the ability to target almost any platform from the same library (GCC requires cross-compiling per target).
Test suite still passes on it though, so upgrading shouldn't be a huge deal at least ¯\_(ツ)_/¯
Once he finished his doctorate at EPFL, off to Stripe he went; bye-bye, Scala. It's a tough industry: on the one hand, Scala benefits from a revolving door of high-level EPFL doctoral students; on the other, the talent pool shifts around as students come and go.
Money talks. Companies like Stripe have a leg up in that they can fund full-time engineers to work on projects, whereas institution-backed projects typically have a much smaller pool of long-term engineers to rely on (JetBrains, for example, has something like 40 full-time engineers working on Kotlin/KotlinJS/Kotlin Native).
[0] https://github.com/DarkDimius [1] https://github.com/lampepfl/dotty
From my uneducated perspective, seems like Graal VM could become the de facto Ruby deployment stack.
If the community doesn't like where things are going, it could fork the whole thing and call it something else, like Coffee.
Graal is EPL, GPLv2, LGPL licensed.
> Compatibility of the structure of AST nodes are not guaranteed.
Not sure if that means it's going to be any more stable or complete than ruby_parser / ruby2ruby.
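The API in question is `RubyVM::AbstractSyntaxTree`, new in 2.6 (MRI only); the quoted warning is about the node types and child layouts, not the entry point itself:

```ruby
# Parse a snippet with the 2.6 API. The entry point is public; the
# node types and children layout are what the release notes decline
# to stabilize, so don't hard-code them across Ruby versions.
node = RubyVM::AbstractSyntaxTree.parse("1 + 2")
puts node.type              # a :SCOPE root node on current MRI
puts node.children.inspect  # layout may differ between releases
```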
Rubex - A Ruby-like language for writing Ruby C extensions.
Oh dear god.
But that doesn't mean you can't use a conventional compiler stack like LLVM as a JIT and get excellent code - it's just going to take its own sweet time doing so.
Can anyone think of any reasonably common stacks using LLVM as a JIT? There's Mono, but that's a non-default mode; not sure if it's typically used. The Python unladen-swallow experiment failed. WebKit had the FTL JavaScript optimization tier, but its LLVM backend was short-lived and replaced by B3.
Which is just a long-winded way to suggest that LLVM is not likely to be ideal as a JIT, at least based on what past projects have done.
(Not trying to imply that writing C to disk is better, but it may well be simpler & more flexible - not worthless qualities for an initial implementation).
I know very little about Ruby specifically, but IME for this kind of dynamic language you get most of the initial gains by:
- removing (by analysis or speculation) dynamic dispatch
- unboxing / avoiding allocations in the easy cases
Once you've done that, you can generate pretty dumb assembly and still come out way ahead of your interpreter (and avoid very costly optimization / instruction selection / regalloc / scheduling).
Most of what LLVM/GCC does only makes sense once you've got your code down close to whatever you would actually write in C.
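A hand-written Ruby sketch of the first point (not what a JIT emits, just the shape of the transformation): guard on the expected class, inline the method body, and fall back to full dispatch when the guess misses.

```ruby
# "Speculating away dynamic dispatch" by hand: a JIT that only ever
# sees Circle at a call site can inline Circle#area behind a cheap
# class check, keeping the generic lookup as the deoptimized path.
Circle = Struct.new(:r) do
  def area
    3.14159 * r * r
  end
end

def area_speculated(shape)
  if shape.instance_of?(Circle)   # guard: the class we bet on
    3.14159 * shape.r * shape.r   # inlined body, no method lookup
  else
    shape.area                    # slow path: full dynamic dispatch
  end
end

puts area_speculated(Circle.new(2))  # => 12.56636
```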
> The main purpose of this JIT release is to provide a chance to check if it works for your platform and to find out security risks before the 2.6 release
Performance is disappointing, though.
Care to elaborate?
> Unstable interfaces. An LLVM JIT is already used by Rubinius. A lot of effort went into preparing the code used by RTL insns (an environment)
https://github.com/vnmakarov/ruby/tree/rtl_mjit_branch#a-few...
The *nix philosophy has long been towards trying to provide choice wherever possible, so that people can use the tool that best meets their needs.
This isn't even remotely true. Crystal syntax looks like Ruby. Crystal's semantics (the bit that matters) are not like Ruby.
Oracle's plan for world domination via JVM is completely changing the performance landscape for dynamic languages.