How dare you belittle the new Super Ultra Nitro Deluxe Gold Platinum emoji, stickers, and playable sound effects.
Are there any benefits/features to using their Electron desktop app?
> the compression time per byte of data is significantly lower for zstandard streaming than zlib, with zlib taking around 100 microseconds per byte and zstandard taking 45 microseconds
Either way, the incremental improvements here are great - and it's important to consider optimization both at the transport level (compression, encoding) and at the protocol level (the messages actually sent over the wire).
Also, one thing not mentioned: client-side decompression on desktop was moved from a JS implementation of zlib (pako) to a native implementation, exposed to the client via napi bindings.
But we don't use any of the usual cloud offerings, only smaller local companies.
Time to compress is a measure of how long the CPU spends compressing, so that is covered in the blog post.
Client-side compute may sound like a contrived issue, but Discord runs on a wide variety of devices. Many of these devices are not necessarily the latest flagship smartphones, or a computer with a recent CPU.
I am going to guess that zstd decompression is roughly as expensive as zlib's, since (de)compression time was a motivating factor in the development of zstd. It's also the reason to prefer zstd over xz, despite the latter providing better compression efficiency.
> Looking once again at MESSAGE_CREATE, the compression time per byte of data is significantly lower for zstandard streaming than zlib, with zlib taking around 100 microseconds per byte and zstandard taking 45 microseconds.
I was surprised to see so little server-side CPU benchmarking, though. While I'd expect overall client timing for (transfer + decompress) to improve dramatically unless the user was on a ridiculously fast network connection, I can't imagine server load not being affected in a meaningful way.
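The "zstandard streaming" being discussed keeps one compression context alive across all gateway messages, so later messages can back-reference earlier ones. A rough stdlib-only sketch of why that matters (using zlib here rather than zstd, since the principle is the same and zstd bindings are third-party): compressing repetitive JSON-ish events through one shared stream versus compressing each one independently.

```python
import json
import zlib

# Simulated gateway events: highly repetitive JSON, loosely modeled on
# MESSAGE_CREATE payloads (the field names here are just illustrative).
messages = [
    json.dumps({"op": 0, "t": "MESSAGE_CREATE",
                "d": {"channel_id": "123", "content": f"hello {i}"}}).encode()
    for i in range(200)
]

# Per-message compression: every message pays the full cold-start cost,
# with no history to back-reference.
per_message = sum(len(zlib.compress(m)) for m in messages)

# Streaming compression: one shared context. Z_SYNC_FLUSH emits a complete,
# decodable chunk per message while keeping the history window warm.
comp = zlib.compressobj()
streamed = sum(
    len(comp.compress(m) + comp.flush(zlib.Z_SYNC_FLUSH)) for m in messages
)

print(per_message, streamed)
```

On payloads like these, the shared-context stream comes out far smaller in total, because everything after the first message is mostly back-references to boilerplate already in the window.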
Also would be interested in benchmarks between Zstandard vs. LZ4 for this use case - for a very different use case (streaming overlay/HUD data for drones), I ended up using LZ4 with dictionaries produced by the Zstd dictionary tool. LZ4 produced similar compression at substantially higher speed, at least on the old ARM-with-NEON processor I was targeting.
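The dictionary trick mentioned above works with any LZ-family codec: a pre-agreed blob of likely-shared bytes gives the compressor history to reference even on a tiny, one-shot message. A small stdlib sketch of the idea using zlib's `zdict` parameter (the hand-made dictionary and payload here are illustrative stand-ins; a real one would come from a trainer like `zstd --train` over a sample corpus):

```python
import json
import zlib

# Hand-made stand-in for a trained dictionary: boilerplate that small
# payloads in this hypothetical protocol are likely to share.
dictionary = b'{"op": 0, "t": "MESSAGE_CREATE", "d": {"channel_id": '

payload = json.dumps(
    {"op": 0, "t": "MESSAGE_CREATE",
     "d": {"channel_id": "42", "content": "hi"}}
).encode()

# Without a dictionary: a short payload barely compresses at all.
plain = zlib.compress(payload)

# With a dictionary: the boilerplate becomes a cheap back-reference.
comp = zlib.compressobj(zdict=dictionary)
with_dict = comp.compress(payload) + comp.flush()

# The receiver must hold the same dictionary to decode.
decomp = zlib.decompressobj(zdict=dictionary)
assert decomp.decompress(with_dict) == payload

print(len(plain), len(with_dict))
```

Both sides shipping an identical dictionary is the operational cost of this approach, which is why it pays off mainly for high-volume streams of small, similar messages.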
I guess it's not totally wild, but it's a bit surprising that common bootstrapping responses (READY) were 2+ MB as well.
Protos or a custom wire protocol would be far better suited to the task.
IIRC it’s used in the desktop client and some community libraries (specifically JDA) have support for it.
There were some quirks regarding ETF usage with Discord’s Gateway but I can’t recall at the moment.
> the metrics that guided us during the [zstd experiment] revealed a surprising behavior
This feels so backwards. I'm glad that they addressed this low-hanging fruit, but I wonder why they didn't do this metrics analysis from the start, instead of during the zstd experiment.
I also wonder why they didn't just send deltas from the get-go. If PASSIVE_UPDATE_V1 was initially implemented "as a means to scale Discord servers to hundreds of thousands of users", why was this obvious optimization missed?
2024-09-20T13:28:42.946055-07:00 hostname kernel: audit: type=1400 audit(1726864122.944:11828880): apparmor="DENIED" operation="ptrace" class="ptrace" profile="snap.discord.discord" pid=1055465 comm="Utils" requested_mask="read" denied_mask="read" peer="unconfined"
And my computer isn't particularly crazy. Maybe like $1500.
But on a more serious note, I still hate bloated software. Money can't buy latency and the sluggishness gets really annoying.
Most of the time when I have seen people complain about this, it is because they have joined a ton of hyperactive servers.
You could argue it shouldn't be an issue and more dynamically load things like messages on servers. But then you'd have people complaining that switching servers takes so long.
Yes.
>Also relevant, do you need to be in all of them?
Yes.
You must be new here, because if you aren't connected to dozens of servers and idling in hundreds of channels (you only speak in maybe two or three of them) you aren't IRCing right.
What? I'm a confused old clod because we're talking about Discord in the year of our lord 2024? Same thing, it's a massive textual chat network based on a server-channel hub-spoke architecture at its core.
What is actually worth our time asking is why we could do all that and more with no problems in the 80s and 90s using hardware a thousandth or less as powerful as what we have today.