Does this pave the way for a “lite” version of the Dropbox client that _only_ syncs files and has none of the “added value” bloat that has crept in of late?
That was one of the reasons I cancelled my paid plan: https://taoofmac.com/space/blog/2020/06/21/1600
I lucked out and have 2 free plans that have bonus storage from various promotions. I get about 25 GB per account. I haven't maxed either one.
I absolutely love the product. My wife scans a file, I can grab it right away. I'm at work and need some document (e.g., my driver's license photo), I hop on the website and download it.
I pay $5 for Backblaze to back up 5 TB. I don't want to spend $10 a month for storage I'll never use (I couldn't even keep that much synced on most of my devices), but I'd gladly pay $3-5 a month for 50-100 GB.
For now, I'll keep mooching with my free plan.
https://help.dropbox.com/accounts-billing/plans-upgrades/dro...
With Dropbox Family, each member of the plan has their own Dropbox account. A single person, the Family manager, will manage the billing and memberships for the entire Family plan.
1) Brotli (Broccoli) compression.
2) Differential updates through librsync.
3) "LANSync", a P2P sync within a broadcast domain (secured through server-issued, short-lived TLS certs).
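The differential-update idea in (2) can be sketched roughly like this. This is an illustrative toy, not librsync's actual API: it only matches block-aligned chunks with MD5, whereas librsync pairs a rolling weak checksum with a strong hash so it can find matches at arbitrary offsets.

```python
import hashlib

BLOCK = 4  # tiny block size for the demo; real implementations use KBs

def signature(old: bytes) -> dict:
    """Map block-hash -> block index for the receiver's copy of the file."""
    return {hashlib.md5(old[i:i + BLOCK]).hexdigest(): i // BLOCK
            for i in range(0, len(old), BLOCK)}

def delta(new: bytes, sig: dict) -> list:
    """Encode the new file as COPY (block index) or LITERAL (raw bytes) ops."""
    ops = []
    for i in range(0, len(new), BLOCK):
        block = new[i:i + BLOCK]
        h = hashlib.md5(block).hexdigest()
        ops.append(("copy", sig[h]) if h in sig else ("literal", block))
    return ops

def patch(old: bytes, ops: list) -> bytes:
    """Rebuild the new file from the old file plus the delta."""
    out = bytearray()
    for kind, arg in ops:
        out += old[arg * BLOCK:(arg + 1) * BLOCK] if kind == "copy" else arg
    return bytes(out)

old = b"aaaabbbbccccdddd"
new = b"aaaaXXXXccccdddd"  # one block changed
ops = delta(new, signature(old))
assert patch(old, ops) == new
# Only the changed block travels as literal bytes:
assert sum(1 for k, _ in ops if k == "literal") == 1
```

The point is that only the literal ops cross the wire; everything else is a cheap reference into data the other side already has.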
That said, the Desktop Client accounts for only 1/3 of overall Dropbox traffic -- the remaining 2/3 is split between Web and API.
Does Dropbox still upload everything, even if the user has uploaded it before?
How's that work? Somehow modify the client to say that you have a file with a user-provided hash even though it doesn't actually exist on disk?
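The exploit against naive content-addressed dedup would look something like this (a hypothetical sketch; the real Dropbox protocol is not public): if the server accepts a hash alone as proof of possession, a modified client can "upload" any file whose hash it knows.

```python
import hashlib

# Hypothetical server-side state: content hash -> stored block
store = {}

def upload(account: set, data: bytes) -> str:
    h = hashlib.sha256(data).hexdigest()
    store.setdefault(h, data)  # dedup: store each unique block once
    account.add(h)
    return h

def claim_by_hash(account: set, h: str) -> bool:
    """Naive dedup endpoint: if the hash is known, link the block to the
    account without requiring the bytes. This is the hole -- real systems
    close it with a proof-of-ownership challenge (e.g. hashing random
    byte ranges the server picks)."""
    if h in store:
        account.add(h)
        return True
    return False

alice, bob = set(), set()
h = upload(alice, b"secret report")
# Bob never had the file, but knows its hash:
assert claim_by_hash(bob, h)
assert h in bob
```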
[1] https://techcommunity.microsoft.com/t5/office-365/onedrive-c...
Not going back to Dropbox yet, though. I'd rather try out Google Drive, since I consider its consumer plans to be much better.
And yes, much of the overhead stems from the RPC server that needs to be implemented. For Lepton we used a raw TCP server (a simple fork/exec server) to answer compression requests: we would establish a connection, send a raw file on the socket, and await the compressed file on the same socket. A strict SECCOMP filter was used for Lepton. It was nice to avoid all this for Broccoli, since it was implemented in the safe subset of Rust.
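That shape of server (one connection per request, raw file in, compressed file out, EOF as the framing) can be sketched in a few lines. zlib stands in for Lepton here, the SECCOMP sandboxing is omitted, and a threading server is used to keep the demo portable where the original used fork/exec:

```python
import socket
import socketserver
import threading
import zlib

class CompressHandler(socketserver.StreamRequestHandler):
    """Read a whole file from the socket, write back the compressed bytes.
    The client signals end-of-file by shutting down its write side."""
    def handle(self):
        raw = self.rfile.read()               # read until client EOF
        self.wfile.write(zlib.compress(raw))  # zlib stands in for Lepton

# socketserver.ForkingTCPServer would match the fork-per-request design
# on Unix; ThreadingTCPServer keeps this sketch runnable anywhere.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), CompressHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

payload = b"JPEG bytes would go here" * 100
with socket.create_connection(server.server_address) as sock:
    sock.sendall(payload)
    sock.shutdown(socket.SHUT_WR)  # tell the server the file is complete
    compressed = b"".join(iter(lambda: sock.recv(4096), b""))

assert zlib.decompress(compressed) == payload
server.shutdown()
```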
:-)
IME it's faster than brotli and often has a better compression ratio.
But the main reason we settled on Brotli was the second order context modeling, which makes a substantial difference in the final size of files stored on Dropbox (several percent on average as I recall, with some files getting much, much smaller). And for the storage of files, especially cold files, every percent improvement imparts a cost savings.
Also, widespread in-browser support of Brotli makes it possible for us to serve the dropbox files directly to browsers in the future (especially since they are concatenatable). Zstd browser support isn't at the same level today.
This advanced feature is only relevant at compression levels 10 or 11, which are extremely slow. Below that, it's barely used by the encoder, due to its memory and CPU cost.
If your application has speed concerns and ends up using Brotli at compression level 1 in production, you may be surprised to find that in this speed range, zstd compresses both faster and better, by quite a substantial margin.
> Pre-coding: Since most of the data residing in our persistent store, Magic Pocket, has already been Brotli compressed using Broccoli, we can avoid recompression on the download path of the block download protocol. These pre-coded Brotli files have a latency advantage, since they can be delivered directly to clients, and a size advantage, since Magic Pocket contains Brotli codings optimized with a higher compression quality level.
There is also a format-agnostic and adaptable heuristic to stop compression if the initial part (say, first 1MB) of the file seems incompressible. I'm not sure whether this is widespread, but I've seen at least one software doing that and it worked well. This can be combined with other kinds of heuristics like entropy estimation.
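A minimal sketch of that prefix heuristic, with zlib standing in for the real codec and the 1 MB probe size and 5% savings threshold as illustrative assumptions:

```python
import os
import zlib

PROBE = 1 << 20      # inspect up to the first 1 MB
MIN_SAVINGS = 0.05   # require at least 5% savings on the probe

def maybe_compress(data: bytes):
    """Return ("raw", data) if a cheap compression probe on the prefix
    saves too little, else ("zlib", compressed whole file)."""
    probe = data[:PROBE]
    if probe and len(zlib.compress(probe, 1)) > len(probe) * (1 - MIN_SAVINGS):
        return ("raw", data)  # looks incompressible: store as-is
    return ("zlib", zlib.compress(data))

# Random-looking input (e.g. already-compressed media) is passed through:
kind, _ = maybe_compress(os.urandom(2 << 20))
assert kind == "raw"
# Text compresses well, so the probe lets it through:
kind, out = maybe_compress(b"the quick brown fox " * 100_000)
assert kind == "zlib"
```

The win is that for incompressible files you pay for one fast probe on a bounded prefix instead of a full-file compression pass that gets thrown away.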
I never realized the advantages of brotli over zlib could be so extensive; in particular, it appears they're getting a huge speed boost (I think in part because it's written in Rust).
>we were able to compress a file at 3x the rate of vanilla Google Brotli using multiple cores to compress the file and then concatenating each chunk.
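The chunk-and-concatenate approach behind that number can be sketched with stdlib pieces. gzip stands in for Brotli here (both framings allow independently compressed streams to be concatenated into one valid file), the 1 MB chunk size is an assumption, and threads suffice for the demo because zlib releases the GIL during compression:

```python
import gzip
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20  # 1 MB chunks; the real chunking strategy is Dropbox's

def parallel_compress(data: bytes) -> bytes:
    """Compress fixed-size chunks on multiple threads, then concatenate
    the independent streams into one file a standard decoder can read."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ThreadPoolExecutor() as pool:  # zlib releases the GIL, so threads scale
        return b"".join(pool.map(gzip.compress, chunks))

data = b"all work and no play " * 500_000  # ~10 MB of compressible input
out = parallel_compress(data)
# A standard decoder reads the concatenated members as a single stream:
assert gzip.decompress(out) == data
```

Concatenatability is what makes this trivially parallel: no shared encoder state means no coordination between workers beyond joining the outputs in order.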
Side note: I admit, at first I thought they were talking about the Broccoli build system[0]
Various compression enhancements, including the addition of zstd and lz4 compression algorithms and a negotiation heuristic that picks the best compression option supported by both sides.
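Such a negotiation can be as simple as intersecting the two capability lists in a fixed preference order. This is a hypothetical sketch; the actual protocol and ordering aren't specified here:

```python
# Hypothetical preference order, best tradeoff first.
PREFERENCE = ["zstd", "lz4", "zlib", "none"]

def negotiate(ours: list, theirs: list) -> str:
    """Pick the most-preferred algorithm supported by both sides."""
    common = set(ours) & set(theirs)
    for algo in PREFERENCE:
        if algo in common:
            return algo
    return "none"  # uncompressed transfer is always a valid fallback

assert negotiate(["zstd", "zlib"], ["lz4", "zlib"]) == "zlib"
assert negotiate(["zstd", "lz4"], ["lz4"]) == "lz4"
assert negotiate(["zstd"], ["lz4"]) == "none"
```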
Just kidding :) great article. As others have said, supporting data was very informative.