No, I'm not joking. We used to allow arbitrary paths in a cloud API I owned. Within about a month someone had figured out that the cost to store a single-byte file was effectively zero, and that they could encode arbitrary files into the paths of those objects. It wasn't long before there was a library on GitHub to do exactly that. We had to put limits on path length, because otherwise people would store their data in the path, not the file.
Reason: to not overcomplicate things, or give the appearance of nickel-and-diming.
The sync application itself can handle this using openat(2) or similar, and it should probably be doing that anyway to avoid TOCTOU races.
Point taken, although I still think it's better for cloud storage services to err on the side of compatibility, i.e., whatever the lowest common denominator is between Linux, macOS, Android, iOS from 10 years ago, and Windows 7.
Avoid arbitrary limits on the length or number of any data structure, including filenames, lines, files, and symbols, by allocating all data structures dynamically.
I assume they're relying on the OOM Killer and quotas to prevent DoSes all over the place.