https://blog.cloudflare.com/aws-egregious-egress/
https://www.cloudflare.com/press-releases/2021/cloudflare-an...
For an honest bandwidth offer, I'd much sooner consider Fly.io than Cloudflare. At least with Fly, their true pricing is transparent.
edit: speaking of transparency, https://imgur.com/HnlWFUe
Granted, Cloudflare, the CDN, has Enterprise plans for higher-TB bandwidth (esp. video), but Cloudflare, the cloud platform, has a more-than-generous free tier, batteries included. AWS' value-based pricing has them extract fees for things as trivial as builds and deploys, and their bills are nothing but a nightmare to parse or estimate. This is in stark contrast to the simple and straightforward pricing with Cloudflare, which we much prefer as a small dev shop. So much so that we choose to pay Cloudflare to host our services even though we've got five-digit AWS credits.
I'd always prefer paying for certainty over designing a solution built on a lottery.
That's huge. Could you explain a bit more about how you achieve that? I've been exploring Cloudflare's cache; AFAIK, anything not cached by default needs a special page rule, e.g. for .html. I tried a wildcard domain.com/* and it doesn't seem to make any difference. Are you using Workers to cache specific file types?
Even within just the CDN category, Cloudflare does a lot with its basic CDN offering (bot blocking, transformations, dynamic caching, etc.) that other vendors may charge as separate options.
That's not to say I don't believe you; I just wanted to make sure it's apples to apples.
Cloudflare's pricing model only really makes sense, IMO, if you're either a small/medium business (which makes it an incredible deal) or if you're working cloud-native on their edge offerings (Workers, Pages, etc.). If you're primarily using them to shield and proxy a LAMP monolith or similar for a large number of users, yeah, I can see how that would get expensive really quickly. There are other vendors that specialize in exactly that: being a dumb CDN. Cloudflare's value is that they enable completely new architectures based on their network topology; you can't easily do something like that on Fastly, for example.
On the other hand, $0.0021/GB is far on the cheaper end of the spectrum; who is offering something that low?
Agreed all around that the pricing is frustratingly opaque.
fly.io looks interesting, though; I'd never heard of them.
It's a test account, I just want it to shut down when the limit is reached.
However, I also think a big problem is that many people on the internet and especially people who try to sell AWS tutorials or learning courses push AWS as some toy that every developer should sign up for on a whim without understanding what they are doing. An AWS account is an industrial-grade tool, it's not a toy, and it should be treated as such. It's like renting a backhoe when you don't even know how to use a shovel yet, and then being surprised when you completely screw up your yard.
Sites like acloudguru that offer ephemeral sandbox AWS accounts are becoming more popular, and people new to AWS should really be steered towards those.
So you get an email saying your $10/mo site is now $1000 for this month, and climbing.
In general I wouldn't recommend using AWS and expecting the free tier for anything that's going to be public facing or autoscaling.
It's not a solution at all, given that the alerting process can lag behind the logging process by several hours or more. If you've hit a traffic spike, it could have rolled over your site and gone in that time, leaving you with a big bill.
Alerts are not a viable solution to traffic spikes unless they're real-time and absolutely bulletproof. AWS's alerting is neither.
https://aws.amazon.com/blogs/aws-cloud-financial-management/...
For the billing system to then "turn off" X, it needs a number of things.
1. It needs the ability to reach back out to that service. It probably has no idea what the service is; all the billing system is likely to receive is something like:
{"service_name": "X", "action": "Put"}
i.e., pretty opaque data with just enough structure to know "this costs X cents and happened Y times". So now your billing system needs to be able to resolve "X" back to some AWS resource that it can talk to. Both the resolution of X and "billing can now talk to every single AWS service" are pretty heavy lifts.
2. It needs to know what "off" is. "Off" for S3 could mean a lot of things.
a) Delete the bucket and all data inside of it
b) Keep the bucket but delete all data inside of it
c) Keep the bucket and the data but disable API access
etc etc. Do you disable PUT? GET? Both? What if what's blowing up your billing is GET?
And this really doesn't get easier for other systems. Do you back up a database before killing it? That incurs charges too.
I don't see AWS somehow solving this in a "one size fits all" way because there isn't one.
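To make the two problems above concrete, here's a toy sketch of what "resolve the record, then pick an 'off'" would entail. Everything here is hypothetical: AWS billing internals are not public, and the endpoint and action names are invented for illustration.

```python
# Toy sketch of the problem described above. All names are hypothetical;
# AWS billing internals are not public.

# What the billing pipeline plausibly sees: an opaque usage record.
record = {"service_name": "s3", "action": "Put", "count": 1_000_000}

# To "turn off" anything, the biller would need a registry mapping every
# service name back to a control-plane endpoint it can talk to...
CONTROL_PLANE = {
    "s3": "https://s3-control.example.internal",  # invented endpoint
}

# ...plus a per-service definition of what "off" even means.
OFF_SEMANTICS = {
    "s3": ["delete_bucket", "delete_objects", "block_put", "block_get", "block_all"],
}

def ways_to_turn_off(rec):
    """Return every candidate interpretation of 'off' for this record,
    or an empty list if billing can't even reach the service."""
    if rec["service_name"] not in CONTROL_PLANE:
        return []
    return OFF_SEMANTICS.get(rec["service_name"], [])
```

Note that even in this toy version there's no single right answer to pick from the list: if GETs are what's running up the bill, blocking PUTs accomplishes nothing, and deleting data is far worse.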
The complaint here is that Amazon offers a free tier supposedly for learning the platform, but it is a giant footgun that shoots a ton of people in the foot.
People are reasonably asking for hard limits to protect them from this highly foreseeable situation, in which a complicated cloud offering runs away with spending.
It is literally as easy as following a beginner tutorial, selecting the database instance the tutorial uses, and leaving it running. That could be a several-hundred-dollar mistake.
So yes, it is shady not to protect customers from that IMO.
BTW, I've never heard of the latter happening at AWS, ever, and I have at other hosting providers.
It'd be a huge engineering effort to make something instantaneous; I think the closest thing they have to such a system is whatever they use for rate limiting or IAM.
I'm guessing there's pretty high overhead to doing things in real time instead of batching.
AWS oopsies suck, but I think their billing system is pretty robust compared to lots of usage-based billing systems (like, say, utilities).
It's pretty reasonable for them to ask for a CC; making it too easy to get free compute/bandwidth opens the door wide for abuse.
...but yeah, everyone wishes they'd have a sane way to halt services when over budget.
But I still don't run anything important on it or push the limits of the free tier. Oracle doesn't have a good reputation. Also, "Oracle Unbreakable Linux" is literally just rebranded RHEL, but it's not a community project and they don't like to acknowledge it, so it feels particularly shameless, especially since they're selling "support" for it.
but yeah, more competition is always good.
See also: Bryan Cantrill's "lawnmower" talk about what happened to Solaris after the Oracle acquisition. https://youtu.be/-zRN7XLCRhc
[1]: https://old.reddit.com/r/sysadmin/comments/d1ttzp/oracle_is_...
[2]: https://upperedge.com/oracle/top-3-reasons-oracle-java-users...
You basically get stuck paying a lot for Oracle software, and that doesn't directly equate to the value the software provides.
I was at a point where I just said "screw this, I don't care about the free trial, just give me an account where I'm supposed to pay for everything". For some reason you have to go through the free trial in order to get a regular account.
I didn't even try out the platform once I got my account. They managed to drain all my energy in the sign-up process.
I'd rather go bankrupt from AWS/GCP's nefarious egress fees than have to deal with Oracle again. Serves me right for giving them a shot.
I second the recommendation to stay as far away from Oracle as you can, even if their OCI pricing seems incredible.
Providers always have a false-positive rate on signups, and last time I looked, Oracle provided a way to contact them and say "you got it wrong" (and they are reasonably good at fixing those situations).
Maybe the idea is that if you're doing 1+ TB of CloudFront traffic, you're already deeply locked into AWS and less willing to make the jump anyway.
The cost savings are equivalent to ~$9/mo in standard US regions (roughly 100 GB at the $0.09/GB first-tier egress rate). Nobody is going to migrate clouds over a $9/mo saving.
If we're talking about CloudFront that comes out to $85. That's actually a pretty good savings, but it's distinct from egress. CloudFront's pricing isn't locking customers in due to egress pricing because it's a CDN, not a data storage service.
With AWS, you can do it, via a trigger on a spend notification and a script, but the whole thing is a giant kludge. It should be a default feature, and not just at AWS: at all cloud providers.
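The kludge looks roughly like this: a CloudWatch billing alarm publishes to an SNS topic, which invokes a Lambda that shuts things down. A minimal sketch, with the actual shutdown stubbed out (a real version would call boto3, e.g. `ec2.stop_instances`, with your own instance IDs):

```python
import json

def stop_resources():
    """Stub: a real deployment would call boto3 here, e.g.
    boto3.client("ec2").stop_instances(InstanceIds=[...])."""
    return "stopped"

def handler(event, context=None):
    # CloudWatch alarm notifications arrive as a JSON document
    # inside the SNS message envelope.
    msg = json.loads(event["Records"][0]["Sns"]["Message"])
    if msg.get("NewStateValue") == "ALARM":
        return stop_resources()
    return "ok"
```

Even this sketch shows why it's a kludge: billing metrics update hours behind actual usage, so by the time the alarm fires, the spend has already happened.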
I'm literally using them less because this isn't a feature they offer. Even the free tier of AWS is too risky without hard limits.
The unfortunate reality is that hobby developers that just want to pay $20/month aren't the target audience for GCP etc. They don't really care if you're using them less for your personal hobby projects. They target large enterprises, and those large enterprises would have very little use for something like "cap my spend at $20".
Even as an AWS employee, I sometimes use non-AWS hosting providers for my own projects. Even outside of the billing situation, AWS is often too complicated for my use cases. It's just not targeted at me and my hobby development projects.
disclaimer: am AWS employee but the above is my own opinion and not official position of the company, etc etc.
For the enterprise as a whole, probably not. But it would still be useful to be able to create sandbox accounts for experimentation with a hard limit on spend. Or give developers their own cloud accounts to run development and testing infrastructure that they can control, without having to worry about them accidentally spending way too much.
(I should note that I haven't used them myself, I'm just impressed by their website/business model.)
It still baffles me that this isn't the default. Then again, egress/ingress costs also baffle me: why not just offer capped speed and unlimited bandwidth?
I doubt that i'll ever willingly use a platform that i pay for myself which can just decide to charge me bunches of money because my site/app got DDoS'ed (assuming not all components are fully protected at the edge or something) or suddenly gained popularity.
In my eyes, such scalability should be opt-in (or at least opt-out), and by default my apps should just break under load, much like any self-hosted software would. That may even be preferable in some circumstances: e.g. the API manages backpressure as best it can, and once request queues fill up, it just returns "503 Service Unavailable" with something like "Retry-After", and any clients/front ends are supposed to show a message to try again later, or automatically retry after a while.
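That backpressure behaviour is simple to sketch; here's a minimal, framework-free illustration (the queue capacity and Retry-After value are arbitrary choices for the example):

```python
from collections import deque

MAX_QUEUE = 100  # arbitrary capacity for illustration

queue = deque()

def accept(request):
    """Admit a request, or shed load with 503 + Retry-After once full.

    Returns (status_code, headers, body), mimicking an HTTP response.
    """
    if len(queue) >= MAX_QUEUE:
        # Queue is full: fail fast instead of scaling up and running a bill.
        return 503, {"Retry-After": "30"}, "Service Unavailable"
    queue.append(request)
    return 202, {}, "Accepted"
```

The point is that the failure mode is bounded and predictable: at worst, clients wait, instead of the operator paying for unbounded autoscaling.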
To that end, here's a list of providers that give you such fixed resources (in this case VPSes) and that i personally have used in the past:
• Time4VPS: https://www.time4vps.com/?affid=5294 (currently hosts most of my sites, hence affiliate link)
• Contabo: https://contabo.com/en/
• Hetzner: https://www.hetzner.com/cloud
• Scaleway: https://www.scaleway.com/en/elements/
• Vultr: https://www.vultr.com/products/cloud-compute/
• DigitalOcean: https://www.digitalocean.com/products/droplets/
Of course, some of those also offer managed services, but in my eyes it's far safer to just run MySQL/MariaDB/PostgreSQL/MongoDB/Redis and any other piece of software in Docker (or OCI-compatible) containers than to use something proprietary, even with protocol compatibility. That way, it's also possible to migrate between providers easily, should the need arise.
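As a rough sketch of that approach, a minimal docker-compose file for a portable data layer (image tags and the password are placeholders; pin versions and use secrets in practice):

```yaml
services:
  db:
    image: postgres:16        # placeholder tag; pin an exact version
    environment:
      POSTGRES_PASSWORD: change-me   # placeholder; use a secret in practice
    volumes:
      - db-data:/var/lib/postgresql/data
  cache:
    image: redis:7            # placeholder tag

volumes:
  db-data:
```

Because everything here is stock images plus a named volume, moving to another VPS provider is just copying the compose file and the volume data.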
This doesn't feel to me like a generous enough statement about all of the finer points.
Would i be okay with my servers no longer responding to web requests (or doing so at reduced port speeds, as some of the above hosts do, e.g. Time4VPS) when a spending limit is unexpectedly hit? Yes, definitely. My projects going dark for a bit, or slowing down, would be preferable to me not being able to pay my rent and having to rely on public outcry about the large bill, and on the vendor's mood that day, to let it slide.
Would i be okay with my servers running out of space and no longer writing new data to any database, merely responding with the appropriate errors instead? Yes, definitely, because that probably signals that lots of space has suddenly started getting used for no good reason, so fast that i didn't even get to plan the appropriate scaling once Zabbix (or whatever monitoring tool has an alert set up for this) warned me about the disk being 80% full. This would also be yet another sanity check.
Would i be okay with my servers suddenly ceasing to exist and being wiped, for any reason short of excessive abuse complaints or repeated ToS violations? I most certainly wouldn't want this to happen, yet in my experience it never actually has. Some of the vendors listed let you pre-pay for the resources you're about to use (e.g. Time4VPS and Contabo), even offering discounts for longer-term reservations much like AWS does, but without AWS-style unexpected charges. Some of the other vendors listed bill by hourly usage (Hetzner, Scaleway, Vultr, DigitalOcean, at least IIRC), but they also won't have unpredictable pricing spikes, because the limits are set for the most part (ingress/egress charges may still apply, however, a worrying trend in the industry that muddies the waters). If i pay for a $5 VPS every month, that's what i can expect to pay most months, regardless of usage (assuming 100% uptime).
With these factors in mind, my servers being wiped would probably have to happen due to either me misusing those services, or me failing to pay the bills even with the more predictable pricing structures, which is very much like a person's electricity being turned off because they didn't pay for it: as unpleasant as that'd be, no surprises there. Personally, i don't subscribe to the belief that managed services with unpredictable billing are the only way to do software nowadays, something Richard Stallman calls SaaSS (Service as a Software Substitute), about which you can read more here: https://www.gnu.org/philosophy/who-does-that-server-really-s...
Alas, as long as VPS uptime with predefined resources is the unit of computation you're paying for, everything else should be fairly simple from there onwards.
Probably easiest to generate a virtual card number with a spend limit. Off the top of my head, I know Capital One offers this; Apple will also generate virtual card numbers, though I don't know if you can set a spend limit on them.
IMO a hard spend limit is a pretty complicated thing to implement, and billing has historically been very batch-based.
I find most apps are pretty binary: either they're high-bandwidth (like video and backups) or not. If you use 1 TB, there's a good chance you'll use 2 TB.
This will certainly make it less likely for low-bandwidth users to get a surprise bill, but if you're doing video, $85 per TB will get expensive fast. You'd better make sure your business model can turn a profit at those prices. Even then, if the average user pays you $20 a month and only streams 100 GB of video, you make money, but you'd make more money outside of AWS.
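A quick back-of-the-envelope check of those numbers, using the ~$0.085/GB first-tier CloudFront rate from upthread and ignoring every cost other than egress:

```python
PRICE_PER_GB = 0.085  # CloudFront first-tier US rate, ~$85/TB

def monthly_margin(revenue_usd, gb_streamed):
    """Revenue minus CloudFront egress cost; ignores all other costs."""
    return revenue_usd - gb_streamed * PRICE_PER_GB

# A $20/mo user streaming 100 GB leaves ~$11.50 of margin,
# but a heavy user streaming 1 TB costs $85 in egress alone.
```

So the model only works while users stay light; the margin flips negative well before 250 GB/user/month at a $20 price point.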
It's almost as if AWS wants to actively discourage those use cases.
I realize businesses need to make a profit, but I don't think AWS is cross-subsidizing. It seems more like there are no loss leaders: they make at least a 60% margin on all products (for users not big enough to negotiate rates). They could lower their outbound prices significantly; they just choose not to until the market forces them.
Which is smart business. But given Cloudflare's pressure here, I think the market is calling for a more aggressive adjustment than just upping the free tier.
[1] https://www.cnbc.com/2021/09/05/how-amazon-web-services-make...
50,000 x 250k = 12.5B
Also, they do cross-subsidize, because many AWS services are either hardly used (CodePipeline) or free (CloudFormation) and the cost to run those services is non-negligible.
Although their pricing [1] after the first 1TB is still very expensive.
Who would pay $100 for a backup test or recovery with S3?
This only applies to the free tier, which you age out of after a year. Who even cares? Or am I misreading this?
They can do a lot better than this.
edit: It appears that the regional transfer doesn't age out either, even though it isn't explicitly stated in this post (whereas they did state so for CloudFront).
Even now you should be getting 1 GB of free out traffic on your older-than-12-months AWS account.
0: https://aws.amazon.com/free/
Is it 100 GB per region or 100 GB across regions? Because it sounds like the latter.
But,
This is THE textbook example of how competition pushes innovation forward and drives prices down.
Thank you, Cloudflare!
Is this similar to Cloudflare Workers?