1. https://www.cloudflare.com/en-gb/products/cloudflare-images/....
Maybe the solution here is "just make sure the asset is cached on the edge", but for first access there's still got to be some impact, no?
I'd love to see some tests/benchmarks on access latency for assets uploaded by, say, a colleague or an app hosted in the EU or Asia, with me in the US.
That being said, I put up a spec to let you provide hints. We won't necessarily honor it today in some cases, and in the future we may ignore it altogether, but the thinking is that you provide the hint and can then retrieve which geographic region the bucket is in.
We also have latency improvements coming down the pipe.
Is the data replicated across geos?
Does all or a portion of it get cached at a local edge once requested?
Does it basically behave as if it was handled by CF cache?
These parts are really confusing for me right now.
> Unable to write file at location: JrF3FnkA9W.webm. An exception occurred while uploading parts to a multipart upload. The following parts had errors: - Part 17: Error executing "UploadPart" on {URL} with the message: "Reduce your concurrent request rate for the same object."
Is this an issue on my end or Cloudflare's? I'm not doing anything aggressive; I'm just uploading one video at a time using Laravel's S3 filesystem driver. It works great on smaller files.
For now there are typically settings you can configure in whatever client you're using to lower the concurrency for uploads.
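The thread doesn't name an exact setting, so here's a minimal sketch of the general idea: cap how many part uploads are in flight against the same object. The `upload_part` function below is a hypothetical stand-in for a real UploadPart call (in boto3, for example, the equivalent knob is `TransferConfig(max_concurrency=...)`).

```python
from concurrent.futures import ThreadPoolExecutor

def upload_part(part_number: int, data: bytes) -> dict:
    # Hypothetical stand-in for a real UploadPart request.
    return {"PartNumber": part_number, "ETag": f"etag-{part_number}"}

def upload_all_parts(parts, max_concurrency=2):
    # Bounding the pool size caps how many UploadPart requests hit
    # the same object at once, which is what the "Reduce your
    # concurrent request rate" error is asking for.
    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
        futures = [pool.submit(upload_part, i + 1, chunk)
                   for i, chunk in enumerate(parts)]
        return [f.result() for f in futures]

completed = upload_all_parts([b"a" * 5, b"b" * 5], max_concurrency=1)
print([p["PartNumber"] for p in completed])  # [1, 2]
```

With `max_concurrency=1` the parts go up strictly one at a time, which is the safest setting when the server is rate-limiting you.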
"Less egress" is essentially the trick with Wasabi, whereas zero egress cost is the defining feature of R2. Of course there must be limits to it, but it is interesting.
Wonder if at this point teams start to consider different S3-compatible providers for different workloads.
No. It's basically one download of all your data in 1 month [1].
> For example, if you store 100 TB with Wasabi and download (egress) 100 TB or less within a monthly billing cycle, then your storage use case is a good fit for our policy. If your monthly downloads exceed 100 TB, then your use case is not a good fit.
In my experience it's an excellent fit for backups if you run disaster recovery tests quarterly on each set and have enough sets to run on a rotating, monthly schedule. You're only downloading about 25% per month at that point.
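To put numbers on that (the figures below are illustrative, not from the thread): Wasabi's fair-use test is simply whether monthly egress stays at or below the amount stored.

```python
def within_wasabi_policy(stored_tb: float, egress_tb: float) -> bool:
    # Wasabi's stated fair-use rule: monthly downloads should not
    # exceed the amount of data stored.
    return egress_tb <= stored_tb

# 100 TB stored; rotating DR tests restore ~25% of it each month.
stored = 100.0
monthly_egress = stored * 0.25
print(within_wasabi_policy(stored, monthly_egress))  # True

# Downloading more than you store in a month falls outside the policy.
print(within_wasabi_policy(stored, 120.0))  # False
```

So a backup workload that restores a quarter of its data per month sits comfortably inside the policy, with plenty of headroom.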
I think governments need to step in and require that compute platforms like AWS are split up into constituent parts, with no cost disadvantage to mix-and-matching between suppliers. E.g. VMs on Azure and storage across the road in AWS should not require payment of egress fees that wouldn't be payable within either provider's network.
Some regulation requiring a common API and a one-click solution to transfer between providers would help solve this. Needs to be implemented intelligently, though.
I would much rather be able to explicitly choose this and know that customers' data is where I told them it would be.
> ... we know that data locality is important for a good deal of compliance use cases. Jurisdictional restrictions will allow developers to set a jurisdiction like the ‘EU’ that would prevent data from leaving the jurisdiction.
It's coming: we understand the importance of data locality & residency requirements.
(I work at CF)
https://community.cloudflare.com/t/r2-per-bucket-token/41105...
The fact that you can't separate prod and dev data with a product that's now in GA is kind of nuts.
Unless I'm missing something with how this fits in with Cloudflare's other services.
With R2 you can also use a bucket or a few buckets per tenant, whereas with S3 that's not possible (even if you have a fat ENT contract with them, from what I've heard). We've extended the S3 spec to make it possible to list more than 1000 buckets [1]. Currently, if you need more than 1k buckets, we ask that you open a customer support request so we can discuss your use case.
[1] https://developers.cloudflare.com/r2/data-access/s3-api/exte...
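The exact parameter names for the extended ListBuckets call are in the linked docs; the sketch below just shows the continuation-token pagination loop such an extension implies, against a hypothetical client interface (the `StubClient` exists only to demonstrate the loop's shape).

```python
def list_all_buckets(client):
    # Generic continuation-token pagination: keep requesting pages
    # until the response stops returning a token. The client interface
    # here is hypothetical; see the linked docs for the real API.
    buckets, token = [], None
    while True:
        page = client.list_buckets(continuation_token=token)
        buckets.extend(page["Buckets"])
        token = page.get("ContinuationToken")
        if not token:
            return buckets

class StubClient:
    # Fake client returning two pages, standing in for a real
    # S3-compatible endpoint that paginates bucket listings.
    def __init__(self):
        self.pages = [
            {"Buckets": ["tenant-0001", "tenant-0002"],
             "ContinuationToken": "t1"},
            {"Buckets": ["tenant-0003"]},
        ]

    def list_buckets(self, continuation_token=None):
        return self.pages.pop(0)

print(list_all_buckets(StubClient()))
# ['tenant-0001', 'tenant-0002', 'tenant-0003']
```

With a bucket-per-tenant layout, a loop like this is what lets you enumerate tenants past the stock 1000-bucket ListBuckets ceiling.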