This is much better behavior than having random latency spikes, which translate into a small but significant share of users complaining that your application is slow. Erlang is the opposite: every user should get the same experience, regardless of the load on the server.
Having it sleep for 100ms per request is a little strange for Node.js though, because it is basically simulating CPU load, even though a Node server typically hands work off to an optimized library that might be disk- or network-I/O bound rather than CPU bound. 100ms is a lot of CPU work per request, no? That is exactly the sort of task Node is not recommended for, since it focuses on async I/O.
Maybe I’m wrong; I need to work on larger projects to see what loads are more typical. At my last job, the entire focus was on using Node to redirect work to C++ code, databases, etc.: basically just request routing.
Still, this looks rough for Node. A 2x to 4x difference between it and the fastest is still a significant cost for such a nice backend to work with.
https://gitlab.com/stressgrid/dummies/blob/b342b02407ce09cec...
This isn't a busy wait. Instead it yields for the specified time period, much like a network request to a backend database would.
Go's concurrency model really fits the task best, though, and it's easy to see that in the results.