Cost also includes the orchestration and suchlike.
The guy giving the test saw it as his duty to tamp down our excessive enthusiasm. After explaining how the test worked he then told us how a recent study showed that programming was the 10th most boring job in the US. He went on at some length. This was back in the days of COBOL running on mainframes so there was probably something in what he said. At the time I supposed he was trying to provide consolation if you got a bad score. Be that as it may, the "stat" stuck with me for the rest of my life.
We've been looking at lambdas or FaaS for quite a few things and... we kind of always end up weighing loads of operational effort against just sticking a method into a throwaway controller on the existing systems. Is it somewhat ugly? Probably. Does it reuse the code people maintain for that domain anyway? Very much so. Would you need a dependency or duplicate maintenance of that model code between the FaaS code and the monolith anyway? Most likely.
All in all, we mostly have teams using Pactbroker to work out their APIs, deploying decently chunky systems in containers, and being happy with it.
The one thing we really use on-demand compute like this for is some model refinement, because there a tiny service can trigger the consumption of rather awesome resources for a short while.
The whole thing could've run on a single low-end VM and used RDS for the database. With a more standard framework, it would've been cheaper, faster, and easier to maintain.
Serverless can be done right, but it's rare...
Sure, that OS patch was auto-applied, but the vuln in our deps that showed up over the weekend still ate up our time.
We lost productivity, the cloud credits ran out and so did our good folks.
For example, if scaling various parts of your application independently is a hard requirement, you're going to want to follow some set of these practices. Otherwise you're probably not going to see any practical benefits relative to the additional complexity.
However, Serverless has its flaws. I wrote a post a while back where I talk about the current landscape. It's a bit out of date, but if there's interest I might update it and post it here.
When life hands you executives with zero accountability, make lemonade I suppose.
For example, anybody operating in high-security environments (finance, healthcare, defence) needs to make sure that some services aren't available on the public internet but are still able to communicate with internal services.
This kind of networking requirement is essentially tied to the application layer: only application development organisations can know whether, e.g., the service to check why a user is banned should be on the public internet.
Such a decision could be expressed by app developers using infrastructure as code, UI toggles or physical buttons in a DC - but it has to be their decision.
Similar logic applies to DNS, data isolation, backup policies, caching strategies and much more.
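To make the "it has to be their decision" point concrete, here's a minimal TypeScript sketch (all names invented for illustration) of the shape such a decision could take: the app team states the intent in the service definition, and tooling derives the actual network settings from it.

```typescript
// Hypothetical sketch: the exposure decision lives in the service's own
// definition, written by the app team, and whatever IaC tool actually
// provisions things translates it into network configuration.

type ServiceSpec = {
  name: string;
  // The app team decides this; only they know the domain.
  publicInternet: boolean;
};

type NetworkConfig = {
  assignPublicIp: boolean;
  allowedIngress: "internet" | "internal-vpc-only";
};

function networkConfigFor(spec: ServiceSpec): NetworkConfig {
  return spec.publicInternet
    ? { assignPublicIp: true, allowedIngress: "internet" }
    : { assignPublicIp: false, allowedIngress: "internal-vpc-only" };
}

// e.g. the "why is this user banned" lookup stays internal:
const banReasonService: ServiceSpec = { name: "ban-reason", publicInternet: false };
const cfg = networkConfigFor(banReasonService);
```

Whether that boolean ends up as a Terraform variable, a UI toggle, or anything else is an implementation detail; the point is who owns it.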
Like, I use serverless for a handful of async tasks that can run in the background. Common things like image resizing, processing data to make metadata tags, or some things I would originally have used Kafka for. With serverless, especially if I'm already on AWS for example, I have access to the whole ecosystem. Once something completes, it can just write data to some store. Done. I don't need to lose my sanity fixing Kafka event failures and other bs.
Core logic goes into monoliths or services. Anything that can be offloaded and is an atomic operation goes to serverless, from either the mono or a service.
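As a sketch of what such an atomic, offloaded task can look like, here's a hypothetical Lambda-style handler in TypeScript; the event shape, tagging rules, and Map-as-store are stand-ins for illustration, not a real AWS integration.

```typescript
// Hedged sketch of an "atomic" offloaded task in the shape of a Lambda
// handler. A real handler would read the object from S3 and write to
// DynamoDB or similar; everything here is simplified stand-ins.

type ImageEvent = { bucket: string; key: string; sizeBytes: number };
type MetadataRecord = { key: string; tags: string[] };

// Pure core: derive metadata tags from the event. Keeping this pure makes
// it trivial to test without any cloud services.
function deriveTags(event: ImageEvent): MetadataRecord {
  const tags: string[] = [];
  const ext = event.key.split(".").pop() ?? "";
  if (ext) tags.push(`format:${ext}`);
  tags.push(event.sizeBytes > 1_000_000 ? "size:large" : "size:small");
  return { key: event.key, tags };
}

// Thin handler: compute, write to "some store", done.
async function handler(event: ImageEvent, store: Map<string, MetadataRecord>) {
  const record = deriveTags(event);
  store.set(record.key, record); // stand-in for a DynamoDB put
  return record;
}
```

The split matters: the pure core could live in the monolith or the function without duplicate maintenance, which is exactly the trade-off discussed above.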
But it's wild: the tech stack at the company I'm working at is TypeScript all the way down. Frontend, backend, infrastructure, all TypeScript. Our team loves it. We also use serverless. Given the perspective in this thread I'm expecting downvotes, but we love that too.
I think serverless is just like everything else. If you implement it poorly it can be slow, expensive, and pointless. If you implement it well it can be cheap, fast, and keep your team lean.
> Given the perspective in this thread I'm expecting downvotes but we love that too.
It's a sad commentary that this is the case on HN. People using downvotes as lazy disagreement with an opinion they don't like is fundamentally anti-hacker IMHO.
I wouldn't go all-in on a stack that only works on, for example, Lambda, but I don't think that's the way most people are going anyway.
Unfortunately some are. And not only that, but also full steam ahead on a 100% locked-in, read _all_ the AWS (serverless) services, AWS-only stack. I speak from unfortunate experience.
I'll take just the Infrastructure as Code section since I know the most about it. First it says Infrastructure as Code was a bunch of DSLs like Ansible, Chef, and Puppet. No, those were the second wave of "Configuration Management". Some folks might include them in "Infrastructure as Code", but I wouldn't. They are tools to manage configuration of traditional servers. They have some IaC capabilities, but they are very limited and not really the core of what these tools provide.
Terraform and CloudFormation are the real exemplars of "Infrastructure as Code". They take a declarative, relatively static approach to defining infrastructure. That's honestly fine for most cases. But I'd lump the AWS CDK and Pulumi in here too. They aren't something new, they just bring the ability to use more traditional languages to the game. But they don't fundamentally provide more flexibility than traditional TF or CF, because they are still bound to the static plan/apply cycle.
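A toy model of that plan/apply cycle, assuming nothing beyond the general shape of these tools: whether the desired state was produced by HCL or by a CDK/Pulumi-style TypeScript program, it ends up as a static declaration that gets diffed against current state to yield actions.

```typescript
// Toy model of the plan step both Terraform and CDK/Pulumi are bound to:
// code (in whatever language) produces a static desired-state map, which
// is diffed against current state to yield create/update/delete actions.
// The string-valued "config hash" is a deliberate simplification.

type State = Record<string, string>; // resource name -> config hash
type Action = { op: "create" | "update" | "delete"; name: string };

function plan(current: State, desired: State): Action[] {
  const actions: Action[] = [];
  for (const name of Object.keys(desired)) {
    if (!(name in current)) actions.push({ op: "create", name });
    else if (current[name] !== desired[name]) actions.push({ op: "update", name });
  }
  for (const name of Object.keys(current)) {
    if (!(name in desired)) actions.push({ op: "delete", name });
  }
  return actions;
}
```

However expressive the language generating `desired` is, the apply step only ever sees this static diff, which is the sense in which CDK and Pulumi don't fundamentally add flexibility over TF or CF.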
I'd not heard of "Composition as Code" until reading this article. But to the extent its meaning is clear from this context, the author isn't making any sense. The author says CaC allows developers to use "finer grained" infrastructure specification than IaC, but if anything, that's the opposite of what they do. By abstracting things into broader chunks of multiple pieces of infrastructure, they are by definition working with "coarser grained" infrastructure.
Anyway, as folks here point out, the movement towards highly specific components rather than actual code for building cloud-hosted services just trades one set of problems (scaling; dealing with VMs; having sysadmins) for another (business logic spread out over dozens of individual "components"; impossibility of rapid testing and troubleshooting; requires massive supporting observability infrastructure to make sense of).
Is there any reason why there's a "server" or "serverless servers"? Why isn't everything just apps talking to each other? I think there's no real hard reason; it's just how things evolved, because early on, connectivity and device power were drastically different between the devices users had and the devices needed to process data and stay on forever.
These days, JavaScript code can be the engine on a desktop machine, in a mobile app, in an HTML client page, and on a backend server. Actually, with WASM and some other tools, any language can do the same. It's even possible for everything to run in a browser that is available on every platform, and all that at decent speeds, because our computers are all supercomputers and the genuinely different high-performance computing happens on GPUs.
Yes, you cannot run compute processes without compute. A server is where that compute happens. Serverless is the concept that your function will run on "some" compute that you don't have to manage. It does, however, still require compute. Running that function on your mobile device, a server, a fridge: all within the realm of the possible. You still, in the end, need compute to execute the code.
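A toy dispatcher (names invented for the sketch) makes that contract explicit: you hand over a function, and which machine actually runs it is the platform's concern, but compute still happens somewhere.

```typescript
// Toy illustration of the serverless contract: the caller names a function
// and supplies input; which "compute" actually runs it is the platform's
// problem. The machine pool is made up for illustration.

type Fn = (input: unknown) => unknown;

class ToyPlatform {
  private fns = new Map<string, Fn>();
  // Pretend pool of machines; the caller never sees or manages these.
  private machines = ["vm-1", "vm-2", "fridge-3"];

  register(name: string, fn: Fn) {
    this.fns.set(name, fn);
  }

  invoke(name: string, input: unknown) {
    const fn = this.fns.get(name);
    if (!fn) throw new Error(`no such function: ${name}`);
    // The platform, not the caller, picks where the code runs.
    const machine = this.machines[Math.floor(Math.random() * this.machines.length)];
    return { machine, result: fn(input) }; // compute still happened somewhere
  }
}
```

"Serverless" hides the `machines` array, but it never removes it.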
Decentralized compute (running on your devices or someone else's) is what serverless is (only, it's wrapped behind a service offering so it's not completely decentralized).
Whatever it is, just drop it and use computers that can execute some high-level language like JavaScript to perform directly useful actions: storing information in an embedded DB like SQLite, reading information, transforming information, transmitting information, and displaying information. Stuff that can be done with huge performance even on a five-year-old mobile phone.
So all that "serverless" stuff is basically that, but it's using traditional server software behind the scenes to provide maintenance-free interfaces for all the things I mentioned. Another aspect is that client devices may connect to server resources and directly manipulate data without intermediary code on the "server", and you still don't need specialised hardware to handle it; it's handled cryptographically, using algorithms that any computer can run.
The problem with it is that it's proprietary, non-portable software that locks you in. Instead of that, you can run the exact same software on every computer (in the datacenter, at the company building, in the hands of people, etc.). Bottlenecks occur in certain situations, so you don't store all your clients' information on every machine, and the problem is solved. You run the same platform everywhere, but each machine operates with an algorithm suitable for its role.
Modern consumer computers are beasts; they are capable of processing huge amounts of data. It's common for handheld devices to load megabytes of JavaScript, compile it, render graphics with it, handle inputs, and send data tens of times per second to multiple servers as the user interacts with it. I don't believe that a device capable of doing that will have a hard time storing and fetching from an SQL database with a few thousand rows.
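A quick back-of-the-envelope sketch of that claim: filtering and sorting a few thousand rows in plain JavaScript/TypeScript is trivial work for any phone-class CPU (the data here is synthetic, purely for illustration).

```typescript
// Back-of-the-envelope check: a few thousand rows queried in memory is
// trivial work, even without a "real database".

type Row = { id: number; name: string; score: number };

// Synthetic table of 5000 rows, roughly the scale the comment describes.
const rows: Row[] = Array.from({ length: 5000 }, (_, i) => ({
  id: i,
  name: `user-${i}`,
  score: i % 100,
}));

// The kind of query a server-side database is often assumed necessary for:
function topScorers(data: Row[], minScore: number): Row[] {
  return data
    .filter((r) => r.score >= minScore)
    .sort((a, b) => b.score - a.score);
}

const hot = topScorers(rows, 95);
```

On this synthetic data the query touches all 5000 rows and returns 250; a filter-plus-sort at this scale completes near-instantly on any modern device, which is the point being made.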
Even if we were to solve ad-hoc client-to-client connectivity once and for all, P2P communication requires trust that many applications simply cannot tolerate. Game servers exist to validate client input, register actions, and distribute them to the players in the game; banking apps verify you are who you say you are and allow the movement or viewing of money; etc.
I don’t see a future where we fully leave the world of central servers behind but I do see one where we value P2P more and create public routing systems that enable this more freely.
Besides, even if the client-server architecture is here to stay, I expect to see significant simplifications in deployment. By that, I mean server apps becoming as easy to run and manage as WhatsApp on an iPhone.
No more complexities related to running Python and all its libraries on Linux in some datacenter. Instead, it should be possible to have an app that just runs and takes care of some high-level code, possibly generated by AI.
For example, maybe in the near future we will be able to tell some LLM to create an app that does something useful for us (e.g. track orders on Shopify and send a personalised questionnaire about the delivery), deploy it anywhere (our own server at home, Amazon, DO, etc.), and never think about the complexities of deployment, only dealing with the value-adding part of the process.