It comes down to a difference in the runtime's purpose.
With a single codebase where millions of running processes are the norm, some handling direct client requests and others handling in-progress work, data storage, or open connections for transfers, you get the ability to deploy new code without disrupting any of that.
Most runtimes can't do anything close to that. Think about all of those X-million-websocket benchmarks, then imagine deploying without forcing all X million clients to try to reconnect at the same time.
And it can do this while all of the nodes stay connected and communicating with each other as well as with the outside world.
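The description matches the Erlang VM (BEAM), so here is a minimal sketch of the idiom that makes this possible: a process that keeps its state and its mailbox across a code upgrade by looping through a fully-qualified call. The module name `conn` and the message shapes are illustrative, not from the original.

```erlang
-module(conn).
-export([start/0, loop/1]).

%% Spawn a long-lived process; in a real system this might hold
%% a websocket or an open file transfer.
start() ->
    spawn(fun() -> loop(#{connected_at => erlang:system_time()}) end).

loop(State) ->
    receive
        {msg, From, Payload} ->
            From ! {ack, Payload},
            %% Fully-qualified (external) call: after a new version of
            %% this module is loaded, the NEXT iteration runs the new
            %% code, while the process, its state, and its connection
            %% all stay alive.
            conn:loop(State);
        upgrade ->
            conn:loop(State)
    end.
```

Loading a recompiled `conn` module (e.g. via `code:load_file(conn)` or `c(conn)` in the shell) upgrades every such process in place on its next loop, which is why a deploy doesn't have to drop a single connection.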
For standard-issue client-server work, it's not that big of a deal. You just separate the web parts behind a load balancer and roll instances.
For background workers, long-lived connections, websockets, video/audio streams, and file transfers (CDNs), it's huge.