> 1.27 million requests per second
> 3TB/minute of traffic
"rails doesn't scale"Each store can be assigned to one pod, each pod can have as many hosts as it takes to optimize the use of a database instance, and then you can add more pods as the need arises.
Edit: to be clear, that's not to say Rails can't scale. It can. It's just that it doesn't need to- you can scale anything with enough partitioning.
If rails were half as fast, you'd need twice as many rails hosts (but no more databases).
I'm sorry, but this is one of the silliest nitpicks I've ever seen on this site. Of course 1.27 million rps and 3TB/minute isn't coming from a single host. 3TB/minute is far beyond the throughput of any network I've ever seen, considered or thought of, short of maybe a data center (and I don't work in that domain). 1.27 million requests per second is far beyond the capacity of pretty much any single piece of hardware available right now.
Also, what is the cost in man-hours spent on optimization and profiling?
Zero. Because Shopify would have waited until Rust came out in 2015, instead of launching in 2006, and they would never have gotten off the ground and been another failed techbro startup that instead of getting shit done, bikeshedded over languages.
PHP and Ruby apps have generated far more revenues than all the Rust and Golang code combined.
GP’s sarcastic “rails doesn’t scale” implies that it would also be a great choice for people starting afresh in 2023. The reply asks for a comparison with other languages popular in 2023, especially ones that are known for being more performant (lower memory and CPU consumption, lower latency).
And that’s when you’re dragging the conversation back to 2006. It’s not 2006 anymore.
> PHP and Ruby apps have generated far more revenues than all the Rust and Golang code combined.
You already stated the obvious: PHP and Ruby apps generated far more revenues simply by existing longer.
Partly, you need mobile now, so any Rails stuff is likely to be back-end and hidden. Plus big investors like to go for the exciting stuff.
But if you're looking at companies that aren't household names (taking smaller amounts of investment), there are lots out there.
Syft (recruitment) were founded in 2016 and have revenues of over $100m per year - although that's partly due to acquisition by a larger competitor, so when I just looked, separating their valuation from the group wasn't immediately obvious.
I've freelanced and contracted across a few niche industries (construction, print, airport signage management!) where I was building something against competitor software that I discovered was at least partially built with Rails. Those big players in each niche would have revenue in the tens of millions and from what I could see, very small technical teams.
But anyone outside those industries would never have heard of these companies.
Not exactly a fair question, because Shopify is much older and is valued at $70B. For a company to have done it in half the time would have been impressive regardless of tech, whereas on average it takes 7 years to become a unicorn.
I do know that Aircall is relatively young, on a good trajectory and runs Rails.
Even if a piece of software could only handle 1 request per second you could handle 1.27M requests if you just run 1.27M servers.
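The arithmetic here is just division. A throwaway sketch (the 1.27M figure comes from the quote at the top of the thread; the per-host throughput is a made-up illustrative number):

```ruby
# Hosts needed to serve a target request rate, given per-host capacity.
# The 1_000 rps per host is purely illustrative, not a Rails benchmark.
target_rps   = 1_270_000
rps_per_host = 1_000

hosts = (target_rps.to_f / rps_per_host).ceil
puts hosts  # 1270 hosts at 1k rps each; 1_270_000 hosts at 1 rps each
```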
> Since Ruby 3.3.0-preview2 YJIT generates more code than Ruby 3.2.2 YJIT, this can result in YJIT having a higher memory overhead. We put a lot of effort into making metadata more space-efficient, but it still uses more memory than Ruby 3.2.2 YJIT.
I'm hoping/assuming the increased memory usage is trivial compared to the cpu-efficiency gains, but it would be nice to see some memory-overhead numbers as part of this analysis.
If your memory usage doesn't plateau, you have a memory leak, which would be caused by a bug in your code or in a dependency.
But 500MB to 1GB of memory for a production Rails app isn't unusual. Heroku knows this, which explains their bonkers pricing for 2GB of memory. They know where to stick the knife.
That is not correct. Ruby does unmap pages when it has too many free ones, and it obviously calls `free` on memory it allocated once it no longer uses it.
What happens sometimes, though, is that because of fragmentation you have many free slots but no free whole pages. That is one of the reasons why GC compaction was implemented, but it's not enabled by default.
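For reference, compaction can also be triggered manually (available since Ruby 2.7, with auto-compaction toggleable at runtime from 3.0; the exact keys in the returned stats hash vary by Ruby version):

```ruby
# Trigger a manual GC compaction; returns a stats hash describing
# how many objects were considered/moved (keys vary by Ruby version).
stats = GC.compact
puts stats.class  # Hash

# Auto-compaction is off by default, as noted above, but can be
# enabled at runtime:
GC.auto_compact = true
puts GC.auto_compact
```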
But in most cases I've seen, the memory bloat of Ruby applications was caused by glibc malloc, and the solution was either to set MALLOC_ARENA_MAX or to switch to jemalloc.
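For anyone hitting this, both fixes mentioned are environment-level, with no code changes. A sketch assuming a Puma server (the arena count and the jemalloc path are illustrative; the path varies by distro):

```shell
# Cap the number of glibc malloc arenas (a small value is the usual
# recommendation for Ruby apps):
export MALLOC_ARENA_MAX=2
bundle exec puma

# ...or swap in jemalloc without recompiling Ruby:
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 bundle exec puma
```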
That’s why good modern allocators like mimalloc and tcmalloc return memory when they notice it’s going unused, so that other services running on the machine can access resources. And this is in c++ land where things are even more perf sensitive.
Extremely bold claim for a framework the size of Ruby on Rails. I would trot out my own evidence, but the receipts are lost to time.
Also, why isn't the allocation behavior tweakable at runtime? It seems like it would be trivial, with no downsides. It's not difficult to think of a scenario where a non-monotonically-increasing heap size is desirable.
Most businesses fail. Those that don't fail, usually don't have interesting scaling issues. (You can go a really long way on a boring monolith stack.)
So in most cases, whatever gets things out into the world and able to see if the business can be validated makes sense, and then you optimize later. A nonscalable stack that you can iterate on 50% faster is more likely to produce a viable company than a more scalable stack that's slower to work with.
If you're a hired employee, it's easy to forget that the place you're working for is already a big exception just by the virtue of it grew large enough to hire you.
Productivity and scalability (in the performance sense) aren't opposites.
Take Bash. It performs badly and is a guarantee of terrible productivity for a large category of software, but it's perfect for a niche. Take Java. It performs better than many languages and allows for good productivity (if you avoid the enterprise architectures, but that goes for any language). Or take Rust. Productivity is much higher than with most C/C++, and in my case higher than with Ruby/Rails, while also being much more performant.
Primarily it's not the language that makes people more or less productive, though it does have some influence. It's mostly the frameworks in those languages. And traditionally the most modern / full-featured web frameworks haven't been in systems languages. The major counterexample at the moment (while still obviously not a systems language) is that modern JS VMs are actually really fast, so while I don't love JS, it does hit that sweet spot at the moment of performance and mature frameworks.
Also, I've never worked in Rust, but am mostly a systems programmer, and while I understand that Rust is supposed to be easier than C or C++, I'm skeptical that it's as easy to work with as higher level languages, or that you could throw most web developers into Rust without some serious additional learning.
They often clash with each other. Rust for example is a lot less pleasant to debug than interpreted languages and that is a loss of productivity.
They don't need to do any of this. The product is fast enough. They make money. It's purely to fatten the bottom line.
What's the current state of Shopify running TruffleRuby, given the tragic loss of Chris Seaton?
Especially with native images, I wonder how that would turn out.
Rails proper, yes. Small Rails apps are generally drop-in compatible, but sizeable applications are likely to run into a few compatibility issues, as most gems aren't tested against TruffleRuby.
> I wonder how much TruffleRuby would improve the performance and memory footprint.
Generally speaking, TruffleRuby is much faster at "peak" performance, but it takes very long to get there, which makes it challenging to deploy.
It also uses way more memory, but that's partially offset by the fact that it doesn't have a GVL, so you get parallel execution with threads.
Ruby is currently working towards true parallel execution with Ractors, for example, and now with YJIT the performance might increase some more.
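A minimal sketch of what that parallel execution looks like with the Ruby 3.x Ractor API (Ractors are still experimental, so expect a warning at startup):

```ruby
# Two Ractors running CPU-bound Ruby code in parallel, without the
# GVL serializing them. Arguments are passed in explicitly because
# Ractors don't share mutable state.
ractors = [1_000_000, 2_000_000].map do |n|
  Ractor.new(n) do |limit|
    (1..limit).sum  # runs in parallel with the other Ractor
  end
end

# Collect both results on the main Ractor.
puts ractors.map(&:take).sum
```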
It's good to see Ruby doing the same. There is something neat about the same code running faster, solely by being on an upgraded platform.
https://www.reddit.com/r/PHP/comments/16hu7dq/php_is_getting...
For 3.2 there also was an improvement of the interpreter:
> We now speed up railsbench by about 38% over the interpreter, but this is on top of the Ruby 3.2 interpreter, which is already faster than the interpreter from Ruby 3.1. According to the numbers gathered by Takashi, the cumulative improvement makes YJIT 57% faster than the Ruby 3.1.3 interpreter.
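Back-of-the-envelope, the two figures quoted imply how much of the cumulative gain came from the interpreter itself (using only the numbers above):

```ruby
# YJIT is 1.38x the Ruby 3.2 interpreter on railsbench, and 1.57x the
# Ruby 3.1.3 interpreter. The ratio isolates the 3.1 -> 3.2
# interpreter-only improvement.
yjit_vs_32_interp  = 1.38
yjit_vs_313_interp = 1.57

interp_gain = yjit_vs_313_interp / yjit_vs_32_interp
printf("3.2 interpreter is ~%.0f%% faster than 3.1.3\n", (interp_gain - 1) * 100)
# => 3.2 interpreter is ~14% faster than 3.1.3
```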
> All that work allowed us to speedup our storefront total web request time by 10% on average, which is including all the time the web server is blocked on IO, for example, waiting for data from the DB, which YJIT obviously can't make any faster.
My rules of thumb:
Python has similar performance characteristics as Ruby.
With Java/C#/Go you’d expect about an order of magnitude of improvement.
With naive Rust/C++ you would likely be at about the same average speed as Java for web applications, but with less memory usage, at least until you make an effort to produce faster code.