But since many people are comfortably being dragged into the Cloudflare vortex through its otherwise generously free offers, you'll find that the Cloudflare Worker CPU time limitation can turn into a huge waste of time after the fact: the worker code you converted a few days ago and were so joyful about suddenly starts failing a few days later.
Addendum: Just to illustrate the moment where you'll trip over it: here the docs casually mention the default minimum being 30s, without making clear that this *only* applies to paid accounts. Only further down somewhere is there a tiny mention of 10ms! https://developers.cloudflare.com/workers/platform/limits/#c...
Here is the only other mention of it: https://developers.cloudflare.com/workers/platform/pricing/
So, if your script can get by with a maximum of 10 milliseconds of CPU time per invocation (not wall-clock runtime), you'll be fine. You will, however (and this is crucial), only realize this a few days in: they take the average, eventually cap you, and the worker stops responding.
This means you can’t actually set different permissions between prod and dev workers, which is a disaster waiting to happen.
(You can’t just make a second Cloudflare account for Prod, because it won’t let you bind single sign-on to two different accounts…)
It also means any employee in the company can just open a dev branch, print out the dev deploy key (from the pipeline), and use it to deploy to prod. It’s currently impossible to block or mitigate.
Multi-account support only comes when you pay for Enterprise.
The workers execute from the same colos as the CDN, which are regionally distributed. They respond fast because they are physically close to the visitor, and Cloudflare limits which runtimes they support to only very highly optimized ones.
And for my money, any platform that doesn’t require K8s is superior to any that does.
Why don't they just offer "managed Postgres"? Because their infrastructure is kept as homogenized as possible, they don't host arbitrary services or software. The only customizable code made available to customers is things like Workers, which are deliberately constrained (in execution time, resource usage, etc.) to, again, keep all their infrastructure homogenized.
Most of their other products exist to provide supplementary capabilities to Workers.
For example, their Durable Objects are comparable (in terms of technical approach, problems they solve, and trade-offs) to AWS's DynamoDB or Azure's Cosmos DB. These products are distributed by nature and work very well for certain kinds of projects and not so well for others. They're also fully in line with the generally homogeneous infrastructure that Cloudflare is engineered to run on.
In summary, Cloudflare has essentially homogeneous infrastructure globally and is able to make its extensive edge infrastructure available to customers for customized applications by constraining it to "serverless" offerings. For customers who can work within the trade-offs of these serverless products, it's an appealing platform.
Always wondering how it's going for folks who are using Cloudflare Workers as their main infra.
Cloudflare Workers support WASM, which is how they support any runtime beyond JavaScript. Cloudflare Worker's support for WASM is subpar, which is reflected even in things like the Terraform provider. Support for service bindings such as KV does not depend on WASM or anything like that: you specify your Wrangler config and you're done. I wonder what you are doing to end up making a series of suboptimal decisions.
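To make the "specify your Wrangler config and you're done" point concrete, a minimal sketch of a Wrangler TOML config with a KV binding (the worker name and namespace ID are placeholders, not from the comment):

```toml
name = "my-worker"                # hypothetical worker name
main = "src/index.ts"
compatibility_date = "2024-01-01"

# KV service binding: exposed inside the worker as env.MY_KV
[[kv_namespaces]]
binding = "MY_KV"
id = "<namespace-id>"             # placeholder; filled in from your account
```

With that in place, handlers can call `env.MY_KV.get(...)` and `env.MY_KV.put(...)` with no further wiring, regardless of whether the worker is plain JavaScript or WASM-backed.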
Most people won't care because the extent of their debugging skills is console.log, echo, print. repeat 5000 times.
https://tedspence.com/the-art-of-printf-debugging-7d5274d6af...
After that it doesn’t matter much which tool you use to verify assumptions.
I don't agree. The first thing any developer does when starting a project is setting up their development environment, which includes being able to debug locally. Stdout is the absolute last option on the table, used when all else fails.
Cloudflare in general is a DX mess. Sometimes its dashboard doesn't even work at all, and is peppered with error messages. Workers + Wrangler + its tooling don't even manage to put together a usable or coherent changelog, which makes it very hard to even track how and why their versioning scheme should be managed.
Cloudflare is a poster child of why product managers matter. They should study AWS.
I was thinking I might be able to cobble together a straightforward vibe-coded thing with Rust -> WASM to make an embeddable comment system, using Cloudflare Workers.
I gotta say that Workers are shockingly pleasant to use. I think I might end up using them for a bigger project.
You didn't even bother to open the link; it covers how the blogger vibe-coded a couple of projects that convert existing projects built with different languages and frameworks to run on Cloudflare Workers.
And even then, have you ever tried anything beyond printf debugging? I bet not.
I also didn't want to have any kind of rich frontend layer, so all my HTML is generated on the backend. I don't even use complex templating libraries, I just have a few pure functions that return HTML strings. The only framework in use is Hono which just makes HTTP easier, although standard handlers that Cloudflare offers are just fine; it takes maybe 2-3 times more lines of code compared to Hono.
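The "few pure functions that return HTML strings" approach can be sketched roughly like this (the function names and escaping helper are illustrative, not from the original post):

```typescript
// Minimal sketch of server-rendered HTML via pure string-returning functions.
// escapeHtml, postItem, and page are hypothetical names for illustration.

function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// One list item for a blog post link; all user-controlled text is escaped.
function postItem(title: string, url: string): string {
  return `<li><a href="${escapeHtml(url)}">${escapeHtml(title)}</a></li>`;
}

// Full page wrapper; the body is assumed to be already-safe HTML.
function page(title: string, body: string): string {
  return `<!doctype html><html><head><title>${escapeHtml(title)}</title></head><body>${body}</body></html>`;
}

const html = page("Feed", `<ul>${postItem("Hello <world>", "https://example.com")}</ul>`);
```

A handler (whether a plain Workers fetch handler or a Hono route) then just returns that string with a `text/html` content type; no templating library is involved.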
D1 is a fine database. Queues are fantastic for my purpose (cron-scheduled fetches of thousands of RSS feeds). The vector database is great: I generate embeddings for each fetched blog post and store them in it, which allows me to generate "related" posts and blogs. R2 is a simple S3-compatible object storage, though I don't have many files to store. Deployments and rollbacks are straightforward, and the SQLite database even has time-travel when needed. (I've also tried Workflows instead of Queues, but found them unstable while in open beta; I haven't tried them since they became generally available.)
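Under the hood, "related posts" from embeddings boils down to a nearest-neighbour search by similarity; Vectorize does this server-side at scale, but the core idea can be sketched with plain cosine similarity (function names here are illustrative):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidate posts by similarity to a query embedding, keep top k.
function related(
  query: number[],
  posts: { id: string; vec: number[] }[],
  k = 3
): { id: string; score: number }[] {
  return posts
    .map(p => ({ id: p.id, score: cosineSimilarity(query, p.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

In practice you would query the vector index with the new post's embedding rather than scanning all posts yourself; this just shows what the ranking means.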
I know this might sound like an ad or something; I have nothing to do with Cloudflare. In fact, I couldn't even get through to the initial interview for a couple of their positions :/ It's just that I always had this cloud over my head every time I needed to create and maintain a web project. The Ruby on Rails + Heroku combo was probably the easiest in this regard, abstracting away most of the stuff I hate to deal with (infra, DB, deployment, etc.), but it was still not as robust and invisible, and also pricey (Heroku). Cloudflare Workers is an abstraction that fits my mindset well: it's like HTTP-as-a-service. I just have to think in terms of HTTP requests and responses, while the other building blocks are provided to me as built-in functions.
Minifeed has been chugging along for 2+ years now, with almost 100% uptime, while running millions of background jobs of various kinds. And I didn't have to think about different services, workers, scaling, and so on. I am well aware of how vendor-locked-in the project is at this point, but I haven't enjoyed web development before as much as I do now.
The only two big missing pieces for me are authentication/authorization and email. Cloudflare has an auth solution, but it's designed for enterprise, I think; I just didn't get it and ended up implementing simple old-school "tokens in DB + cookie". For email, they have announced a new feature, so I hope I can migrate away from Amazon SES and finally forget about the nightmare of logging into the AWS console (I have written step-by-step instruction notes for myself which feel like a "how to use a TV" note for some old, technically-unsavvy person).