What sort of negotiated rates can you get from AWS for bandwidth, I wonder? At the moment that seems like the only real benefit of CF to me.
As an example I investigated: to put a custom domain in front of a B2 bucket, they suggest using Cloudflare and CNAME-ing a bucket subdomain (e.g. f000.backblazeb2.com) https://www.backblaze.com/docs/cloud-storage-deliver-public-...
Well if f000.backblazeb2.com is used for any other people's buckets too, which appears to be the case, I guess I am now able to serve other people's files from my domain? This seems terrible.
> You must configure page rules to allow Cloudflare to fetch only your Backblaze B2 bucket from your domain. ... Otherwise, someone could use your domain to fetch content from another customer's public bucket. To ensure this does not happen, Cloudflare lets you use page rules to scope requests to your bucket.
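To make the failure mode concrete, here is the invariant such a page rule has to enforce, sketched as plain Python. The bucket name and paths are hypothetical, and in practice this is Cloudflare page-rule configuration rather than code you write; the sketch only illustrates the check:

```python
# B2's friendly URLs look like /file/<bucketName>/<fileName>, and the
# f000.backblazeb2.com host serves many customers' buckets. If the proxy
# doesn't scope requests to YOUR bucket, your domain will happily serve
# anyone's public bucket.
MY_BUCKET = "my-bucket"  # hypothetical bucket name

def is_allowed(path: str) -> bool:
    """Only proxy requests that stay inside our own bucket."""
    return path.startswith(f"/file/{MY_BUCKET}/")

print(is_allowed("/file/my-bucket/photo.jpg"))       # True: our bucket
print(is_allowed("/file/someone-elses/secret.txt"))  # False: reject
```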
Hetzner Storage Boxes (2.50-3 EUR per TB) are probably the sweet spot. B2 if you need an object storage API.
There are other ways to compete.
https://www.backblaze.com/cloud-storage/landing/ad/use-cases...
A $25/TB drive is not the only expense that $5 goes towards:
* there are actually probably two or more drives holding that TB, since the business is promising that the data won't be lost
* there's the computer(s) that hold those drives
* there's the electricity, bandwidth, and space rental costs for those computers
* there's the cost of employees to make sure that the computers keep running
* there's the cost of marketing so that you know the service is available
* there's all the bookkeeping, taxes, credit card fees, etc. that need to be paid on the recurring charge
* there's (hopefully) profit for the investors/owners
and so on.
Also, on your side you should consider several of those factors yourself to do the comparison:
* how much do you consider the time spent managing your HDDs to be worth? (if you're a business this is employee-hours; if you're talking about yourself privately, there's still a value you should attach to your own time)
* do you have backups? If so, what does it cost to put them offsite? (In terms of space rental or favors traded, and your time)
* electricity, etc
* how much is it going to cost you to learn to reliably store your data (in terms of up-front cost, time spent, etc)
* and of course hard drive costs
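To make the comparison concrete, here's a back-of-the-envelope sketch. Every number below is an illustrative assumption, not a real quote, and the point is only that the drive itself is the smallest term:

```python
# Back-of-envelope DIY vs. managed storage, in $/TB/month.
# All figures are assumptions for illustration only.
drive_cost = 25.0            # $/TB for a raw drive
replicas = 2                 # two copies so one failure loses nothing
drive_life_months = 5 * 12   # assume 5-year drive lifetime
hardware = drive_cost * replicas / drive_life_months  # amortized $/TB/mo

power_and_space = 0.50       # $/TB/mo, assumed
admin_hours = 0.25           # hours/TB/mo spent babysitting disks, assumed
hourly_rate = 30.0           # what you value your time at, assumed
labor = admin_hours * hourly_rate

diy = hardware + power_and_space + labor
print(f"DIY:     ${diy:.2f}/TB/mo")  # the labor term dominates
print(f"Managed: $5.00/TB/mo")
```

With these (made-up) numbers the DIY figure lands well above $5/TB/month, and your own time, not the hardware, is the biggest line item.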
My argument would be that this would be helpful given the high adoption of things like Ring doorbells and other home camera systems. People could store their own data, which provides better security and privacy, given you need not rely on a data connection to store that footage. It would also be extremely worthwhile if personalized LLMs and tools like Home Assistant become common. You wouldn't want that running off-site. In fact, I'd rather call home from my mobile LLM than call FAANG (or anyone else with teeth).
I just think buying used servers on eBay or trying to throw together a home rig is harder than it needs to be. I'm confident the demand exists, but it is unfortunately a Field of Dreams scenario: many people will not know they want it until it exists. (I can say my parents would love this, but they don't understand the first thing about technology, so all they can do is complain about Google/Apple having all their data rather than express how they want to store their own.)
How much do you think 1TB of storage should cost?
I was pretty surprised at the lack of dogfooding, wondered if it's an oversight, on somebody's Gantt, or just not something R2 can handle for some reason.
AWS has its own issues, but the push to have everything talking over API did wonders for the ability to use them as you want.
Sorry, could you please elaborate? Why can you not use a binding to an R2 bucket – and perform operations on its objects – in a `fetch()` handler of a worker? Or did I misunderstand this statement?
So... something isn't right here. Maybe a Mechanical Turk, where a live human is fetching the object using Windows Explorer behind the scenes?
Obviously the example above is contrived, but the same principle applies to a pool of 1000 disks as it does to 1. You also don't escape this issue with regular hot storage: there is still a (((iops * replication count) / average traffic) / max latency) type problem lurking, which would still necessitate either limiting density or increasing redundancy according to the expected IO rate. This is one reason why some S3 alternatives with weaker latency bounds (not naming names; they're great, but it's just not the same service) can often be made substantially cheaper, and why at least one of S3's storage classes may be implemented entirely as an accounting trick, with no data movement or hardware changes at all.
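The density-vs-redundancy tension can be sketched numerically. All figures below are illustrative assumptions: per disk, the IOPS its data attracts grows with how many TB you pack onto it, while the IOPS it can supply stays flat, so past some density you need extra copies just to serve reads:

```python
import math

# Illustrative assumptions, not real hardware specs.
IOPS_PER_DISK = 150   # random IOPS a single HDD can sustain
REQS_PER_TB = 10.0    # assumed read rate attracted per stored TB (req/s)

def copies_needed(tb_per_disk: float) -> int:
    """Copies required so the replicas together can serve the demand."""
    demand = tb_per_disk * REQS_PER_TB      # IOPS this disk's data attracts
    return max(1, math.ceil(demand / IOPS_PER_DISK))

for size in (4, 16, 24):
    print(f"{size} TB/disk -> {copies_needed(size)} copies")
# 4 TB needs 1 copy; 16 TB and 24 TB already need 2
```

Since both demand and supply scale linearly with pool size, the same arithmetic holds for 1 disk or 1000, which is the parent's point.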
The differences stack up for, say, a 1GB video that goes viral and triggers terabytes in egress. You pay for 1GB, not terabytes.
It’s also an optional tier.
Under the condition that you actively monitor the usage and manage to "process it once" on time (and then "process it back"). Because otherwise you pay for terabytes - not in egress fees, but in processing fees. Or am I missing something?
> "Data retrieval is charged per GB when data in the Infrequent Access storage class is retrieved and is what allows us to provide storage at a lower price. It reflects the additional computational resources required to fetch data from underlying storage optimized for less frequent access."
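The economics of a retrieval fee reduce to a simple breakeven. The prices below are placeholders, not Cloudflare's actual rates; the point is only the shape of the calculation:

```python
# Breakeven sketch for an infrequent-access tier: cheaper storage, but a
# per-GB fee on every retrieval. All prices are assumed placeholders.
standard = 0.015    # $/GB-month, assumed standard storage price
infrequent = 0.010  # $/GB-month, assumed infrequent-access price
retrieval = 0.010   # $/GB retrieved, assumed fee

# IA is cheaper while: infrequent + retrieval * reads_per_month <= standard
breakeven_reads = (standard - infrequent) / retrieval
print(f"IA wins if data is read less than {breakeven_reads:.1f}x per month")
```

Which is exactly the grandparent's worry: read the data back even a couple of times a month and the cheaper tier stops being cheaper.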
I like the "automatic storage classes" idea as well.
> "…you can define an object lifecycle policy to move data to Infrequent Access after a period of time goes by and you no longer need to access your data as often. In the future, we plan to automatically optimize storage classes for data so you can avoid manually creating rules and better adapt to changing data access patterns."
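For reference, an S3-style lifecycle rule expressing "move to infrequent access after 30 days" looks roughly like the document below. The rule ID, prefix, and storage-class name are assumptions; R2's exact accepted values may differ, so check Cloudflare's docs before relying on this:

```python
# Rough shape of an S3-compatible lifecycle policy document.
# IDs, prefix, and the storage-class name are hypothetical.
lifecycle = {
    "Rules": [
        {
            "ID": "cool-down-old-objects",   # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},   # hypothetical key prefix
            "Transitions": [
                # Storage-class name is an assumption, not R2's documented value.
                {"Days": 30, "StorageClass": "INFREQUENT_ACCESS"}
            ],
        }
    ]
}
# Against an S3-compatible endpoint, a document like this is what you'd pass
# to put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=...).
print(lifecycle["Rules"][0]["Transitions"][0]["Days"])  # 30
```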
Magic Transit (bring your own ASN), classic website DDoS protection (above the $200 Business tier, which has low, undisclosed data limits in regions like New Zealand), and their ilk all require interacting with a sales rep, and unless you're paying five figures a month they are uninterested.
There is a whole market out there between $300 and $2,000 a month that Cloudflare could tap without building new infrastructure, but it is actively being ignored.
They lock a lot of features behind an Enterprise plan where they could allow them to be added to a lower plan.
In general, I just hate working with sales reps and would rather avoid a company altogether if I can’t sign up without talking to them.
Can you please explain what this means?
We wanted to buy their SASE DLP and Remote Browser Isolation as a startup. Sales wouldn't even talk to us.