Also, since this is a beta product, I doubt you would hit many problems with any deployment solution at this point. Why are you using Aurora + serverless + EventBridge at all? It's magical that you are all using the same dependencies as prod, but why are you using these dependencies in the first place? All you need is a single Postgres database (easily spun up on any developer machine) + a CRUD app (easily built with full-stack TypeScript using Next.js or the like).
A word of caution about having this many moving parts: they will bite you in the ass. Right now it seems all rainbows and magic, but there is a reason people on this website preach simplicity -- when shit hits the fan, the simple solutions are easy to debug and fix without spending thousands of dev hours. The complicated, magic solutions are the ones that cause 24+ hour downtimes and need to be re-architected. You are a customer service tool, not a devops company; the choices here don't make much sense.
Dev environments in traditional shops are usually a PITA and only loosely resemble production. We’re all just used to it. This model is different, but once you’re used to it, it must be quite a joy. The main drawback seems to be that developing against real services (rather than emulated ones) is slower, but that seems like a worthwhile trade-off IMO.
The serverless trend breaks this a lot by making the production environment cloud-only and not something that you can run offline.
And those systems have the entire Builder Tools org making sure they work.
At least their solution architects get some moral redemption from peddling these messes.
One issue (which he mentions) is that not everything is mockable in AWS (or any other cloud environment) - this has some repercussions for TDD.
So much vendor lock-in, so much dependence on internet connectivity, so many buzzwords around... so much magic (as the post's title describes). I really dislike having to deal with a lot of third-party magic when programming.
The Serverless Stack framework is not the bread and butter they describe it as: it often locks up with the CPU going to 100% on my machine, misses debug breakpoints, breaks with weird error messages pointing to compiled JS code that is hard to trace back to the TypeScript source, etc, etc. And you are right: you are locked in, there is too much magic, and you are dependent on an internet connection.
Maybe I just inherited a shitty serverless project, but it is painful.
That said though, we often work with teams in our Slack to get their setup working well. If you haven't, I'd recommend popping in and posting about it. Or just send me an email: jay@serverless-stack.com and we'll figure it out.
I'm currently working on using Terraform in Azure DevOps Pipelines and Releases to stand up an instance of something with multiple 'cloud object' requirements. The Release process is frustrating because the default assumptions made by Azure's tooling want me to format my variables a certain way. But it would probably feel like 'magic' if I just caved in and reworked our variables into a more Azure-friendly naming style.
Why should the tool have influence over the product? I know it's taking me more time to get it to work 'my' way, but I feel like I should force the system to adapt to my needs, instead of the other way around.
Throwing my two cents into the conversation: perhaps one day we’ll be able to have our cake and eat it too. If http://localstack.cloud (which replicates AWS services locally) ever makes it to a 1.0 release, I may think seriously about a vendor-lock-in-friendly approach like this. Unfortunately, LocalStack recently (v0.14.1) forced me to give up on every ambition in my ideal develop-and-test-it-locally wishlist: first I gave up on CDK provisioning resources locally, then I abandoned using ECR images for LocalStack Lambda, and then I abandoned LocalStack altogether. Their S3 mock still works for local dev though!
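For what it's worth, the S3-mock workflow the parent describes usually comes down to pointing the SDK at LocalStack's default edge endpoint (localhost:4566) in dev and at real AWS otherwise. A minimal sketch of that switch, with illustrative region and dummy credentials (LocalStack accepts any):

```typescript
// Shape of the config we hand to the SDK; mirrors the relevant subset
// of the AWS SDK v3 S3Client options.
interface S3ConfigSketch {
  region: string;
  endpoint?: string;
  forcePathStyle?: boolean;
  credentials?: { accessKeyId: string; secretAccessKey: string };
}

function s3Config(useLocalstack: boolean): S3ConfigSketch {
  if (!useLocalstack) {
    return { region: "us-east-1" }; // real AWS: resolve endpoint normally
  }
  return {
    region: "us-east-1",
    endpoint: "http://localhost:4566", // LocalStack's default edge port
    forcePathStyle: true,              // avoid bucket-name.localhost DNS lookups
    credentials: { accessKeyId: "test", secretAccessKey: "test" },
  };
}

// Usage (with @aws-sdk/client-s3 installed):
//   const s3 = new S3Client(s3Config(process.env.USE_LOCALSTACK === "1"));
```

The `forcePathStyle` flag matters because virtual-hosted-style bucket URLs don't resolve against localhost.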
The other thing is... if you want to develop locally but still on top of Lambda, you either skip API Gateway altogether, or you write an undeployed shim (using Express or Node's http module) to fake that RESTful pattern locally. As great as avoiding the APIGW+Lambda integration is, Lambda doesn't believe in return codes other than 200 (ignoring throws and server failures), and you should probably use the @aws-sdk/client-lambda package. This all scatters weird Lambda anti-patterns around. Yuck; that's going to be a big "chore:" commit if you ever get sufficiently sick of it. And I'm actually not in favor of using the Lambda response interface, but APIGW Lambda integrations are a seeming shit show. VTL and selection patterns seem so bad I'd rather just get more Kubernetes experience. Knative is good these days, I hear.
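The "undeployed shim" idea can be sketched with nothing but Node's built-in http module: translate each incoming request into an API-Gateway-v2-style event, call the Lambda handler, and translate the result back. The event shapes below are trimmed to the fields most handlers touch; this is a sketch, not a faithful reproduction of the full APIGW payload:

```typescript
import { createServer, IncomingMessage, ServerResponse } from "node:http";

// Trimmed shapes, loosely modeled on API Gateway HTTP API (v2) payloads.
interface LambdaEventSketch {
  rawPath: string;
  rawQueryString: string;
  requestContext: { http: { method: string } };
  body?: string;
}
interface LambdaResultSketch {
  statusCode: number;
  headers?: Record<string, string>;
  body?: string;
}

// Pure translation: method + URL (+ body) -> event-like object.
function toEvent(method: string, url: string, body?: string): LambdaEventSketch {
  const [path, query = ""] = url.split("?", 2);
  return {
    rawPath: path,
    rawQueryString: query,
    requestContext: { http: { method } },
    body,
  };
}

// Local-only server wrapping a Lambda-style handler; never deployed.
function serveLocally(
  handler: (e: LambdaEventSketch) => Promise<LambdaResultSketch>,
  port: number
) {
  return createServer((req: IncomingMessage, res: ServerResponse) => {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", async () => {
      const result = await handler(
        toEvent(req.method ?? "GET", req.url ?? "/", body || undefined)
      );
      res.writeHead(result.statusCode, result.headers ?? {});
      res.end(result.body ?? "");
    });
  }).listen(port);
}
```

Keeping the translation pure (`toEvent`) means the shim itself is unit-testable without starting a server.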
Using SAM Local [1] to emulate API GW + Lambda and DynamoDB Local [2]. For the rest we have flags we can switch to either call the real deal on a personal AWS account, or mock responses locally.
I definitely like the idea of using AWS services as much as possible, unfortunately each dev getting their own account was shut down by my current employer.
For local development, we continued using Django normally inside its own Docker container, with matching dependency versions and the like.
I write a conventional 12 factor app, with no vendor-specific code which could be executed on my local machine for development, Heroku, or anywhere else, and hand it over to Google Cloud Run. It's really an amazing service.
I have a few wishes for their service (e.g. integration with Papertrail, an easy way to run background workers, etc.), but overall the whole thing is the best of all worlds:
* No vendor lock-in from platform-specific code
* Easy local development
* Serverless scalability and pricing
Are there particular reasons why people jump through so many hoops to use Lambda when such a superior experience exists?
I remember there was once a whole discussion when people decided it was no longer possible for a single human to learn "all of Windows". I feel like we are at that point with cloud services... it is now beyond the realm of possibility for a mortal human to obtain comprehensive knowledge of cloud computing.
Fly.io - https://fly.io/ (gives you a Postgres DB too)
Digital Ocean App Platform - https://www.digitalocean.com/products/app-platform
Scaleway Compute Containers - https://www.scaleway.com/en/docs/compute/containers/quicksta...
That said, I spent time digging into Elastic Beanstalk and ended up a bit disappointed. Amazon doesn't close projects, but they tend to launch a lot of stuff and not always keep developing it fully.
Do you mean that you could write a regular Flask/PHP app and it will automatically make it serverless? What about long-running tasks that get triggered by HTTP?
To answer your question, it's just a lot easier to use a serverless framework to fully utilize separate AWS services (authentication, message streaming, database), while the solution you described struggles with custom platform-dependent binaries (getting ffmpeg or a specialized version of PIL to work on AWS Lambda was a nightmare).
Hopefully somebody else can chip in here; I've never used Fargate or Cloud Run, but I'm nevertheless open to them when I can use them.
It is close to Fargate. But on Fargate there's still a lot you have to control manually, like when to spin up new instances, etc.
Cloud Run has an extremely simple concurrency model: you tell it how many requests your app can handle concurrently and how many instances are allowed to run at any given time.
> Do you mean that you could write a regular Flask/PHP app and it will automatically make it serverless? What about long-running tasks that get triggered by HTTP?
It's Docker-based, so it doesn't really care. It just spins up your Docker image and expects your app to listen on $PORT.
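That contract really is the whole interface: any plain HTTP server that reads the injected PORT variable will run on Cloud Run, with no vendor SDK involved. A minimal sketch (8080 is Cloud Run's documented default when PORT is unset):

```typescript
import { createServer } from "node:http";

// Cloud Run injects PORT into the container; fall back to 8080 locally.
export function resolvePort(env: Record<string, string | undefined>): number {
  const p = Number(env.PORT);
  return Number.isInteger(p) && p > 0 ? p : 8080;
}

// The entire platform "integration": a plain HTTP server on $PORT.
export function start() {
  const server = createServer((_req, res) => {
    res.writeHead(200, { "content-type": "text/plain" });
    res.end("hello from a 12-factor app\n");
  });
  server.listen(resolvePort(process.env));
  return server;
}
```

The same image runs unchanged on a laptop, Heroku, or any container host, which is where the "no lock-in" claim comes from.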
Are there many other good contenders in this space?
We also do a lot of “serverless”, but the way we do it seems far less vendor-dependent than this. Basically, we layer out the “node” part of our “serverless” application and let our cloud provider act in essentially the role that Express or similar might have played for you 5-10 years ago. We also handle a lot of the federated security through a combination of ORMs, OPA, AD, and direct DB access control for the very rare BI application that needs it.
This way we can leave our cloud vendor at any time. Not without figuring out a suitable replacement, but far more easily than in this article, while maintaining almost all of its advantages.
Interesting read, and if you’re certain AWS is a good home for the next five years from your most recently deployed service, then I don’t see too much of an issue with vendor lock-in.
But with TypeScript, aren't you ending up writing more code? Curious to know more about this: are you seeing tangible benefit from not having to switch between backend and frontend? I'm somehow skeptical of this, and I'm always curious to hear from others who might prove otherwise.
We are using Python on the backend (AWS Chalice) and just React JavaScript. I don't really find much issue switching back and forth. And post-3.5 there is type hinting too, so I didn't really see the appeal of using TypeScript.
Now if only there were a way to write frontend applications in Python, that would be a dream :)
> We also do a lot of “serverless”, but the way we do it seems far less vendor-dependent than this. Basically, we layer out the “node” part of our “serverless” application and let our cloud provider act in essentially the role that Express or similar might have played for you 5-10 years ago. We also handle a lot of the federated security through a combination of ORMs, OPA, AD, and direct DB access control for the very rare BI application that needs it.
Hmmm, I guess we are sort of locked into AWS (since we use AWS Chalice), but it could easily be ported to a Python Flask server, since the syntax is almost identical, and we could absolutely run it on a VPS if we chose to.
I think that depends on who you are and where you work. I actually came from a decade's worth of C# and Python experience before starting at this place. I even wrote my first five “serverless” services in Python here, before switching to TypeScript around December 2021. It wasn’t love at first sight, let me tell you, but the control TypeScript gives you, once you decide upon some standards and enforce them, just makes things easier to manage. Basically, we write functional services that utilise interfaces (a bad name) as a form of data-object types, and it sort of gives you this magical environment with the best of Python and C# mixed together.
You could probably achieve the same with Python, but we couldn’t. ;)
As far as speed, I’m talking about how we can reuse our internal packages. I wrote an NPM package to handle OData API calls, with generics so that you can auto-complete anything in the query. I wrote it for our React clients (we use TypeScript for those), and then when I was writing a “serverless” function to do some heavy lifting in C# or Python, it annoyed me that I couldn’t just consume the same library/package. Which is basically how I was converted to TypeScript. I mean, there were other compelling arguments, but that’s what really did me in.
I’m not a fanatic though. I’m sure I’ll write more C# and Python, and probably something else when it makes sense to do so. But our Python “serverless” applications are frankly built the same way our Node applications are. It’s a little different with C# because the thing I haven’t yet named is Azure, but ideally they too should be decoupled.
Yes, you type a little more, and your code is easier to refactor and change in the mid/long term. So you save time, and sanity.
Not to mention the cost of a slower runtime with Python... I've seen serverless bills of $100k+ a month with Node.js. If something twice as slow were used (and Python is more than twice as slow on average), we'd be talking $200k+...
I’d lean towards either something dynamic and quicker to develop with, like Elixir, or else use Rust for the parts where stronger compiler guarantees are absolutely vital.
I see TS just like CoffeeScript.
Doing justice to the advantages TS provides would take a proper blog post though.
In most other cases, SAM or direct cloud formation is usually sufficient.
Our application has evolved from a simple API Gateway + frontend architecture to numerous asynchronous processes, dozens of integrated Lambdas, numerous S3 buckets and DynamoDB tables, third-party system integrations, cross-account integrations, etc.
It's been great for our velocity, costs next to nothing to run in development, and has been generally straightforward to refactor any "monolithic" parts into smaller, reusable, event-driven units. We operate in a very traditional industry and interface with much older style enterprise systems.
I wouldn't say any part of it is truly "magical", but the ability to give each developer their own fresh copy of the environment for each new piece of work at little to no cost is incredible. It's wonderful not having to concern ourselves with scaling each of our resources or worrying that they'll go down, as well as not needing the regular maintenance that comes with servers or containers.
I've been experimenting with serverless for some time now and came to many of the same conclusions written about here. The biggest takeaway for me is that there are pitfalls to an overreliance on lambdas. You really need to offload as much as you can to the other serverless solutions AWS provides.
I've been using Appsync for my graphql API instead of API gateway + Lambda, and I have had a good experience with it. A lot of logic can be offloaded into mapping templates, making lambdas unnecessary in many cases. Debugging is still a bit of a pain for me, but the end result is fast, cheap, and reliable.
I'd be curious what kind of use-cases people have that makes this complexity worth it.
If you're not using managed services, i.e running your own database, backups and all - you might as well use bare-metal.
The only places I've worked at/for as of the last 2-3 years that don't heavily rely on managed services were an absolute disaster. A lot of "not invented here" syndrome. A lot of anti-patterns and janky hacks to make things work.
Currently working somewhere 100% locked in to AWS and I've never been happier.
That said, RDS is great. Managed, automated backups and portable.
This is super interesting. I'm almost certain they mean AWS accounts (with appropriate IAM permissions etc.). Otherwise billing, developer onboarding/offboarding, and such quickly get out of control.
Back in 2015 at my previous company I had setup two AWS accounts, one for test stack and one for production. That in itself was quite a bit of pain.
To manage our AWS accounts we use:
- AWS SSO hooked up to our Google Workspace: so no AWS access keys exist, everyone has only short-lived credentials (e.g. 24 hours) to access their AWS account.
- AWS Organization with consolidated billing: all our bills roll up into one nice invoice!
- AWS Control Tower: allows us to deploy guardrails and policies to keep all our AWS accounts secure. We also have a centralized Audit AWS account where all Cloudtrail logs are routed.
- AWS Account Factory: to create new AWS accounts that are automatically enrolled and created as part of the right Org Unit.
- AWS Cloudformation StackSets: allows us to deploy custom resources to everyone's AWS accounts. Right now we use this to deploy custom roles that can be assumed by developers.
Hope that answers your question!
With separate IAM users you can still hurt each other, since you share the same space for all the services, unless you write some pretty advanced access control rules, I guess.
[1] https://aws.amazon.com/blogs/opensource/managing-aws-organiz...
https://github.com/serverless-stack/serverless-stack
Given their most obvious competitor is already ambiguously named and extremely popular:
If your architecture follows domain driven design principles, you can build very large, complex, scalable, and maintainable systems.
Yes, you hit the nail on the head :) 99% of our code actually lives in domain packages that are infrastructure agnostic. Our lambda handlers are very lightweight adapters that take the input and transform it to a domain input and transform the output back.
That said, our serverless architecture does leak a bit into our domain thinking. E.g. certain things are built async since it's just so easy to hook up a lambda to listen to an EventBridge event.
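The "lightweight adapter" pattern described above can be sketched in a few lines. Everything here (the ticket domain, the handler shape) is hypothetical and only illustrates the split: the domain function is pure and framework-free, and the Lambda handler only translates in and out:

```typescript
// Domain layer: pure, knows nothing about Lambda or AWS.
interface CreateTicketInput { customerId: string; subject: string }
interface CreateTicketOutput { ticketId: string }

function createTicket(input: CreateTicketInput): CreateTicketOutput {
  // Real domain logic would live here; the id derivation is a stand-in.
  return { ticketId: `${input.customerId}-${input.subject.length}` };
}

// Infrastructure layer: the only code that ever sees the event shape.
interface ApiEventSketch { body?: string }
interface ApiResultSketch { statusCode: number; body: string }

export async function handler(event: ApiEventSketch): Promise<ApiResultSketch> {
  if (!event.body) return { statusCode: 400, body: "missing body" };
  const input = JSON.parse(event.body) as CreateTicketInput;
  const output = createTicket(input); // domain call, trivially unit-testable
  return { statusCode: 200, body: JSON.stringify(output) };
}
```

Because the domain function never imports anything Lambda-specific, it can later be mounted behind Express, a queue consumer, or a CLI without touching the business logic.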
Note how their docs don't give any technicals as to _how_ this tech works[1].
1. https://docs.microsoft.com/en-us/azure/azure-functions/funct...
The OP is actually relatively simple and clean, though, compared to some other things I've seen.
I hate to be an armchair expert, but I'll do my best to give the _counter_ opinion to "this is a model of a good startup stack".
If you're looking to build a web app for a business on a small team, here are some guiding values I've found to be successful (which feel counter to the type of values that lead to the stack in the article):
1.) Write as much "Plain Ol'" $LANGUAGE as possible[2]. Where you do have to integrate with the web framework, understand your seams well enough that it's hard for your app _not_ to work when it receives a well-formed ${HTTP/GQL/Carrier Pigeon/Whatever} request.
2.) Learn the existing "boring" tools for $LANGUAGE, and idioms that made _small_ shops in similar tech stacks successful.
3.) Learn $LANGUAGE's unit test/integration test framework like the back of your hand. Learn how to write code that can be tested, focus on making the _easy_ way to write code in your codebase to be to write tests _then_ implement the functionality
4.) Have a strong aversion to adding new technologies to the stack. Read this [1], then read it again. Always ask "how can I solve this with my existing tools?". Try to have so few dependencies that it would be hard to "mess up" the difference between local and prod (you can go a LONG way with just Node and PostgreSQL).
Some heuristics to tell if you're doing a good job at the above points:
1.) You don't have to prescribe Dev tool choices (shells, editors, graphical apps, git flows, etc)
2.) You can recreate much of your app on any random dev machine, and feel confident it acts similar to production.
3.) Changing lines of code in your code base at random will generate compiler errors/unit test failures/etc
Most every real-world software project I've worked on in the SaaS world ended up with "complexity" as the limiting factor in meeting the business's goals. When CPU/network/disk etc. was the culprit, usually the hard part of fixing it was trying to _understand_ what was going on.
Plain may be very successful in their flow, but I'd say most everything in this article runs counter to the ideas that I've seen be successful in the past.
[1] https://boringtechnology.club/
[2] At our shop we'd say "You're a Ruby programmer, not a Rails programmer; your business logic is likely well factored/designed if it could be straightforwardly reworked into a Rails-free command line app"
Instead of "innovation token" I would call it an "attention token" :) but it seems people might mistake it for some crypto coin.
To me, the reason this is rampant in the SaaS world is that many companies are burning through millions of VC money every month, and they have to justify enormous valuations by inflating problems, creating solutions for problems that used to be solved by boring tech, and then marketing it all to developers on social media, which creates the illusion that if you are not on the gravy train you are somehow not an expert.
Always fascinating to see just how much marketing and politics go on, consciously and subconsciously.
I'm currently working on a large Flask app. It really should have been Django from the outset. There's a special kind of hell in having the almost-but-not-as-good version of everything.
As hit-and-miss as my Azure experience has been… Azure Functions Core Tools, VS Code, and Azurite were excellent for purely local development.
I do a lot of data engineering, and serverless is an absolute godsend. I have set up all our data-eng infra on Lambdas and AWS Batch, and I am super happy. The only issue is the resources we waste on our always-running Postgres instance.
I would love to hear more about your experience with Aurora Serverless PostgreSQL. The main turn-off for me is that it only works with postgres version 9.something.
With all that said, I am working on building a framework that I would describe to the crowd here as Django for Serverless. It is in early stages but you can check it out at https://staging.cdevframework.io/. One of the main focuses is to make it easier to write code that is not dependent on running on a Functions as a Service (FaaS) model, so that your code can eventually be bundled up and deployed on a traditional server when a FaaS platform becomes uneconomical.
Sorry, I needed to vent :( But my point is, sometimes it's easier to avoid building the things that your team doesn't know how to build. And maybe you can't hire the right people to build it for you. If they're all comfortable with their severless stack, they're far better off than the other company I'm talking about.
Oh god is it Monday morning again?
Then yeah, AWS or any US stock exchange-listed, centralized cloud host would be deemed a platform risk, but for the majority of us it is not an issue.
Nevertheless, I am cognizant of the political risks translating into various platform/financial issues for such groups, but it has always been this way when new mediums of information exchange are introduced. Perhaps for a short time during the early days of the internet such an idea was a reality, but we've gone far past that; regulators and political interests have caught on, and it would take a very robust, decentralized, and scalable mesh network to pull it off successfully.
If this is one of your concerns maybe you should rethink what you're doing.
Working at a multi-billion dollar company we don't have those worries, so we're 100% in on AWS.
Vendor lock-in is a real thing, but vendor lock out is usually a vendor issue.
They did accidentally release documentation about "Lambda Function URLs" at the end of last year, and then pulled it back. So perhaps some of that is coming.
Magical
For example, before giving a satisfying answer on why a serverless implementation is better, the article delves straight into tests, and how things are complicated because the serverless architecture means more hinges, glue code, and moving parts which now need to be tested. Since this particular serverless interpretation relies on cloud as a service, hermetic local development doesn’t exist, so they build a solution around interactive serverless development.
This article is a solution seeking a problem. Many of the solutions here seem like caveats of pursuing this serverless stack rather than features. It is enlightening and useful to know what’s needed to get a fully integrated serverless DevEx, but it doesn’t motivate why people should elect serverless over a monolith.
> We need to make sure that the few engineers that we do hire can make the most impact by delivering product features quickly while maintaining a high quality.
> This meant being able to run our services at a scale at a low cost without needing to have a whole department just looking after a homegrown infrastructure.
> as well as how we’ll be able to build a successful business in the next 5 to 10 years without needing to do foundational replatforms
All these points can be true with a monolith. If anything, these points make a monolith seem viable.
There have been articles posted around about using Postgres as a hammer for everything. If anything, vendor lock-in to serverless platforms will likely force replatforms in 5-10 years.
> serverless applications. One of the main differences is that you end up using a lot of cloud services and aim to offload as much responsibility to serverless solutions as possible
This contradicts the desire to avoid replatforming.
> Having personal AWS accounts allows each developer to experiment and develop without impacting any other engineer or a shared environment like development or staging. Combined with our strong infrastructure as code use, this allows everyone to have their clone of the production environment.
Again, I’m not sure how this is a feature. It adds another incidental complexity to debug: engineers may have different permissions and roles. Alternatively, if the engineers are going to have the exact same “environment” as the deployed environment, then the individual accounts don’t provide much value.
> full-stack TypeScript ... The ability to move between the frontend, backend, and infrastructure code without having to
This is an orthogonal concern to serverless.
> develop features... in isolation without impacting other engineers due to everyone having their own AWS account.
This is orthogonal to serverless. In fact, it barely seems like an infra concern. If local environments existed, developers could just have their own local development branch in Git. They could also test and deploy in separate remote branches. Here, the problem was created because local development is done against live services.
Typically these developer blog posts are intended to evangelize the company. Personally I find many red flags in the company culture. Seems like a fun and cool place if you want to try out the latest and greatest tech trends, and have a chance to scratch your own engineering itches. But if you care about job security, you’d want to understand what is the business impact of all this research and development?
A website doesn't need a server! You can host your website assets on a CDN, and all of it is then loaded without hitting any of your servers.
Our architecture roughly looks like this:
- React application hosted on Vercel
- AWS API Gateway routes API requests to Lambdas
- The Lambdas then read/write data from RDS Postgres and DynamoDB, and may also publish events to EventBridge
- Many smaller Lambdas listen to events happening on EventBridge and do other things, such as sending notifications, etc.
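The EventBridge side of an architecture like this usually boils down to Lambdas building event entries and publishing them. A sketch, keeping the event construction pure so it can be unit-tested without AWS; the bus name, source, and detail-type strings are made up for illustration:

```typescript
// Pure construction of an EventBridge entry (field names match the
// PutEvents request shape in the AWS SDK).
interface EventEntrySketch {
  EventBusName: string;
  Source: string;
  DetailType: string;
  Detail: string; // JSON-encoded payload
}

export function notificationEvent(userId: string, message: string): EventEntrySketch {
  return {
    EventBusName: "app-bus",              // illustrative bus name
    Source: "app.notifications",          // illustrative source
    DetailType: "NotificationRequested",  // illustrative detail-type
    Detail: JSON.stringify({ userId, message }),
  };
}

// Publishing side (with @aws-sdk/client-eventbridge installed):
//   const client = new EventBridgeClient({});
//   await client.send(new PutEventsCommand({ Entries: [notificationEvent("u1", "hi")] }));
```

Downstream, each smaller Lambda subscribes via an EventBridge rule matching on `Source`/`DetailType`, which is what makes the "many smaller Lambdas" fan-out cheap to add to.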
I think if you're able to build a non-trivial application with microservices, then you're also able to build one just on Lambdas.
> "Good magic decomposes into sane primitives."
and
> "If it _isn't_ composed of sane primitives, steer clear."
This feels more like the latter.
I'm actually quite skeptical of this claim. Learning a new language isn't really a big deal unless you are using relatively "esoteric" stuff like Clojure or Datalog, which really requires an experienced consultant to train your team.
With AWS Chalice, we've been able to ship production-scale code (for GovCloud) in Python without any one of us breaking the environment, simply by using separate stages. We were able to get PHP/JavaScript developers to use it with barely any downtime. In fact, it was more or less appreciated for the clean and simple nature of Python right from the get-go.
This feels like way too much engineering from the get-go. Here's my workflow with AWS Chalice, and it's super basic (I'm open to improvements here):
- checkout code from github
- run localhost and test endpoints written in python (exactly like Flask)
- push to development stage API gateway
- verify it is working as intended; this is when we catch missing IAM roles and document them, or notice if something is wrong with our AWS setup (we don't use CDK; we simply use the AWS console to set everything up once, like VPC and RDS)
- push to production stage API gateway
All this shimming, TypeScript (my rule of thumb: > 40% more code for < 20% improvement through less documentation and fewer type errors, only really valid in large teams), and separate AWS developer accounts seems like overkill.
The one benefit I see from all this extra compartmentalization is if you are working in large teams at a large company with many "clients" (internal and external). You are going to discover missing IAM roles and permissions anyway; being an implicit "human AWS compiler trying different Stack Overflow answers" is part of locking yourself into a single vendor.
One positive I do see is CDK, but if you are deploying your infrastructure once, I really don't see the need for it, unless you have many infrastructures that can benefit from boilerplate generation.
Happy to hear from all ends of the spectrum. serverless-stack could be something I explore this weekend, but there's just so much going on, and I'm getting a lot of marketing-department vibes from reading the website (like "idea to IPO" and TypeScript to rule them all). To top it off, going to https://docs.serverless-stack.com/ triggers an antivirus warning about some Netlify URL (nostalgic-brahmagupta-0592d1.netlify.app). What is going on here???
Seems like the issue has been fixed since, but it was quite strange. My AV would not let me load the website (similar to how it blocks some websites), as it would detect that Netlify address. If you could investigate this, that would be great.
Will follow up with an email when I get the chance.
It's not a complete mirror image of what you get but it's close enough in my experience.
You could very well use those integration tests to write architecture fitness functions (which is implicitly what part of the tests do), but having developers write those functions instead of architects risks the architecture decisions not carrying over to those tests.
For small use cases like this (S3, API Gateway + Lambdas, SQS, SNS, ElastiCache), which despite having many moving parts come together more or less cohesively (they all depend on each other for a single feature), this works. But if your use case grows, by adding ETL jobs, more async workloads, etc., you can start having issues with architecture tests not really covering all bases.
Regarding leaking so many cloud abstractions to the developer, I am not really sure what to make of this. I think the arguments about cognitive overload may well be valid; however, a basic understanding of the target infrastructure is necessary for a developer to build applications effectively. In this case I think it works because they go all in on leaking abstractions, going as far as provisioning a separate account for each developer. What won't really work is to have abstractions built upon abstractions to make developers think they are using the cloud (by emulating the stack locally) when they aren't actually using it. So in this case being consistent is more important: either don't leak any cloud abstraction, or go all in.
It's the epitome of a "serverless" architecture.
Development and deployments are so much simpler, and for a business with money, the price difference is negligible. You can dev/test locally, you're not tied to a provider; it's essentially just another boring web app.
However, for personal projects I’ve been playing with serverless out of interest, to see if it’s ready yet, and instead of paying $10-20 a month for a VPS I pay fractions of a cent.
I develop my Lambda as a monolith application, not a Lambda per endpoint. I’m told this is an anti-pattern… my take is that I’m just using Lambda as another compute deployment target, and it’s fine. I use hexagonal architecture, so my app knows nothing about Lambda, which makes unit testing easy.
Next I wire up a very thin adapter layer that takes the Lambda request json and converts it to the required value my app needs for routing. This is at the very edge of the app. I like to use this design regardless of lambda, I can swap out any web framework easily, even build a cli frontend for testing with minimal effort. In the context of lambda, using hexagonal architecture means I can bin Lambda, replace it with a standard web framework and deploy as a standalone app with minimal effort if I need to.
With the lambda in place I have a Cloudflare worker as the entry point to the lambda. It takes a request and forwards it to my lambda. I use a Cloudflare worker as it’s cheap/free (generous free tier) and I get a cache at the edge. I’ll use Cloudflare pages or s3 with Cloudflare in front for static assets.
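The worker can be little more than a URL rewrite plus a forwarded fetch. A sketch in Workers module syntax (the origin URL is a made-up placeholder):

```typescript
// Hypothetical Lambda function URL; any HTTPS origin works the same way.
const ORIGIN = "https://example.lambda-url.eu-west-1.on.aws";

// Pure helper: swap the host for the Lambda origin, keeping path + query.
export function toUpstreamUrl(requestUrl: string, origin: string): string {
  const incoming = new URL(requestUrl);
  return new URL(incoming.pathname + incoming.search, origin).toString();
}

// Cloudflare Worker entry point. The edge cache honours whatever
// Cache-Control headers the Lambda sends back on the response.
export default {
  async fetch(request: Request): Promise<Response> {
    return fetch(new Request(toUpstreamUrl(request.url, ORIGIN), request));
  },
};
```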
I use Lambda for the app instead of Cloudflare Workers simply because I want to interact with DynamoDB/S3 and I can manage permissions better inside AWS with IAM. I also want to use Rust, which has very fast Lambda execution times, and I had a few issues with Cloudflare Workers' WASM that I lost interest in figuring out as I was only experimenting. As I’m fronting with Cloudflare I’m also extremely dogmatic about cache headers from the Lambda and propagating them to reduce calls to the origin/lambda.
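Being dogmatic about cache headers can be as simple as a small helper every read-only handler goes through. A sketch with assumed header values (not from the post):

```typescript
// Hypothetical helper for Lambda proxy-style responses. Setting s-maxage
// lets the Cloudflare edge cache serve repeat requests without touching
// the origin, while max-age controls the browser cache independently.
export function cacheable(body: string, edgeMaxAgeSeconds: number) {
  return {
    statusCode: 200,
    headers: {
      "Content-Type": "application/json",
      "Cache-Control": `public, max-age=60, s-maxage=${edgeMaxAgeSeconds}`,
    },
    body,
  };
}
```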
The end result is reasonably performant. It’s fast but not the fastest, as expected with the hops/latency, and it’s extremely cheap. A small pet project may be single-digit cents, if even that. It’ll also handle large volumes of traffic, easily, without worrying about provisioning issues.
However, I have to jump through too many hoops to get what I have, more than I’d like to on a professional project. The orchestration is complex, and it feels like what I save in $$$ I pay for in slower dev time jumping through hoops to get the absolute lowest cost. I enjoy this stuff and it’s a personal project done for education; still, I’d be hesitant to go this way for a real paid job, as interesting as I find it.
Also, pay as you go is great when it’s costing fractions of a cent, but it’s also terrible in that it opens you up to a new attack vector: someone DoS’ing your unbounded pay-as-you-go services, and you waking up to very large bills. Always build in rate limiting for services you use with on-demand pricing.
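The classic shape of that rate limiting is a token bucket. A minimal in-memory sketch of the idea only — in a real serverless setup you'd enforce limits at the edge (e.g. API Gateway throttling or Cloudflare rate-limiting rules), since in-memory state doesn't survive across invocations:

```typescript
// Token bucket: capacity tokens max, refilled at a steady rate.
// Each request takes one token; when the bucket is empty, reject.
export class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSecond: number,
    now = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  tryTake(now = Date.now()): boolean {
    // Refill proportionally to elapsed time, capped at capacity.
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.refillPerSecond
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```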
>For example, Plain’s January bill for our 7 developer accounts was a total of $150—pennies compared to the developer velocity we gain by everyone having their clone of production. Typically, the largest cost for each developer is our relational database: Amazon Aurora Serverless v1 PostgreSQL. It automatically scales up when it receives requests during development and down to zero after 30 minutes of inactivity.
I don't understand this at all. If "7 full production instances cost $150" then your application is tiny and you don't need 15 AWS services. The storage costs alone should far exceed that for a large application. If a $150 production instance is "scale" then we're lost as an industry. If your application is this tiny: Just. Use. A. Server.
Did you miss that this is oriented around serverless? It doesn’t mean their application is tiny, it just means they can scale down a long way. Which, given that they are using serverless, is unsurprising.
Sure, if you are talking about dedicated EC2 instances or something, then a $150 “production” instance is tiny. But that’s not the situation here. $150 for developer load on serverless doesn’t correspond to $150 for a full production service.
DynamoDB and Amazon Aurora are serverless storage solutions but you still pay for the data you store. It is highly surprising their total production storage cost is less than ~$21. For reference, a few terabytes of data in DynamoDB costs thousands a month.
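Back-of-envelope, assuming DynamoDB standard-table storage at roughly $0.25 per GB-month (an assumed us-east-1-style list price; check current pricing):

```typescript
// Storage-only estimate; ignores read/write request costs, backups, GSIs.
const ASSUMED_PRICE_PER_GB_MONTH = 0.25; // hypothetical standard storage rate

export function monthlyStorageCostUsd(terabytes: number): number {
  return terabytes * 1024 * ASSUMED_PRICE_PER_GB_MONTH;
}
// 4 TB at this rate is about $1,024/month before any request costs,
// which is where the "thousands a month" figure comes from.
```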
This is a monstrous misassumption by Plain. For instance, when I was trying to work with Bitcoin chain data, I would be looking at close to 1TB of data that I needed each time to replicate the complete application, and the bill was something like $500 just to keep it in EBS.
So with 10 developers we would essentially be paying somebody's salary, and I wonder if the problems listed in this article are so critical that it would be worth that much.
None of our engineers have so far managed to generate gigabytes of data or millions of requests like our production account :)
I had the same feeling when we reached the pinnacle of complexity of Windows programming with COM/DCOM/ActiveX/.NET/WinForms/Silverlight/Visual Studio... All of that felt like necessary progress. Yet, a simple script piping text output into a browser via CGI felt like a breath of fresh air. We need this for web development now.