So congrats on the funding, I hope you can convert some haters :)
[1] obligatory plug: https://notify.run
Don't get me wrong; from a resource-management and scaling perspective it's great. But that doesn't outweigh the massive pain it creates during development.
We're in the process of rebuilding a project to move everything out of a Serverless Architecture. After six months of building it on serverless we finally all agreed it was a big mistake.
[1] That's our experience with AWS. May be different with other providers.
[2] I recognize that Serverless Framework helps mitigate this but that's just yet another abstraction on top of abstractions in my opinion.
I think the issue is some people see "serverless" as an all-or-nothing scenario, which it really isn't. Some problems are solved well with serverless and some are not. It's like the container vs. virtual machine argument. One isn't designed to replace the other - they're just different tools for solving different problems; each has its own strengths and weaknesses.
But then... any architecture benefits from localhost-first.
https://www.markheath.net/post/remote-debugging-azure-functi...
So what if I gave you a command-line tool to simply make the current code in your development environment live?
Want a debugger? Nope.
And I also gave you live step debugging, plus replays of recent server state?
Want log files? Gotta get them from a different service that frequently has a lag of 30 seconds or more.
And the logs from the instance you're debugging are available instantly?
No-worries devops experiences are nothing new. See Heroku.
That aside, I agree that there are a lot of secondary concerns that are important when running things in production but that aren't available out-of-the-box when you run something on AWS Lambda. I'm thinking of error monitoring, performance monitoring, logging, and so on. All of those need to be set up, and that's quite time-consuming.
However, I think that's more due to serverless being relatively new and not as mature as the traditional way of doing things. I don't think it will be long before we have the serverless equivalent of adding `gem 'newrelic_rpm'` to your Gemfile and magically having performance and error monitoring across your app.
That said, where I work (MaaS Global) we have a production PostgreSQL database hosted on AWS Relational Database Service (RDS):
https://aws.amazon.com/rds/postgresql/
We connect to the AWS RDS instance in our Lambda functions using a Node.js SQL query builder called knex.js, with environment variables storing the DB credentials.
Exactly the same way you would with an EC2 instance...
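For what it's worth, a minimal sketch of that setup (the environment variable names and pool sizing here are illustrative, not our actual config):

```javascript
// Build a knex.js config for an RDS Postgres instance from environment
// variables set in the Lambda console (variable names are illustrative).
const dbConfig = {
  client: 'pg',
  connection: {
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
  },
  // Keep the pool tiny: each Lambda container holds at most one connection.
  pool: { min: 0, max: 1 },
};

// const knex = require('knex')(dbConfig); // then query exactly as on EC2
module.exports = dbConfig;
```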
Serverless is nice, but the ecosystem of serverless tooling is really lacking today IMO
I don't think we'll be considering a fully serverless architecture anytime soon due to cold-start times, but it's awesome for anything outside of the user request/response loop, or for internal microservices where response time is perhaps not such a problem.
[1] https://aws.amazon.com/premiumsupport/knowledge-center/inter...
For me, serverless is pay-as-you-go pricing, no over- or under-provisioning and, last but not least, no server management.
Lambda, DynamoDB, S3, AppSync, Device Farm, Aurora, etc. are all serverless.
As a consumer, I essentially get to treat it as such - there is no server I need to manage.
In AWS-land this is a big differentiator over a system I have to manage myself, such as EC2, that just executes containers (meeting the FaaS definition). Now I have FaaS and I don’t have to manage the infra, which is huge because AWS will be far better at meeting a patching SLA than I will be.
Yes, obviously there is a “server”, but I don’t have to think about it.
That is - a piece of code in a monolith may fail, and a piece of code in a microservice may fail, but I've already written the code to handle network errors for the microservice case, which means I've also implicitly handled the "bug in code" case.
And the ones you aren't responsible for are ones that the rest of the world uses every day at 1000x the scale you do.
In your opinion, is it easy to differentiate between dev/prod environments during development? How about logging?
Differentiating dev/prod is not too bad, everything is labelled based on your naming scheme for serverless.
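As a concrete (hypothetical) example, a serverless.yml fragment where the stage is baked into every generated resource name, so dev and prod deployments never collide:

```yaml
# Hypothetical serverless.yml fragment - the stage name flows into
# function and resource names automatically.
service: my-api
provider:
  name: aws
  stage: ${opt:stage, 'dev'}
functions:
  hello:
    handler: handler.hello
    environment:
      TABLE_NAME: my-api-${opt:stage, 'dev'}-items
```

Deploying with `serverless deploy --stage prod` then produces a completely separate set of resources from the dev stack.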
Having said that, I think cron jobs are a great use case for lambdas.
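For instance, a scheduled function in serverless.yml can stand in for a crontab entry (the handler name here is illustrative):

```yaml
# Sketch: a Lambda fired on a schedule instead of a cron entry.
functions:
  nightlyCleanup:
    handler: cron.cleanup
    events:
      - schedule: cron(0 3 * * ? *)  # every day at 03:00 UTC
```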
Because it's purely a marketing concern; it means nothing from an architectural standpoint. It's a buzzword, like 'cloud', 'nosql', 'web 2.0' or 'the blockchain'.
Serverless computing is a system architecture design whose emphasis is abstracting away the entirety of the infrastructure to let developers focus solely on code.
For example, with AWS Lambda + API Gateway + S3, developers can create web applications that previously required EC2 servers and a web framework like Spring, Laravel, etc.
Apps hosted on AWS with EC2 require strong knowledge of system architecture and system design. They are also quite complex to scale, monitor and manage.
Serverless abstracts all this away and lets you focus only on code.
AWS Lambda is self-healing, meaning that when a function crashes, only that invocation fails, not an entire server. API Gateway is managed by AWS and won't crash (or it's very unlikely), and S3, as far as I'm concerned, has never let me down.
Meaning I could author an entire application similar to Hacker News without any "server", logically speaking. Technically speaking there is always a server, just like NoSQL doesn't mean "no SQL" but "not only SQL" - and it's not a buzzword :)
Every technical person understands there's still a server there. So it seems like a marketing tactic intent on misleading clueless CEOs.
Originally it was a marketing buzzword designed to make Amazon's FaaS offering seem like a bigger deal than it was, and somewhat misleading in that role, because FaaS offerings of the type it was applied to aren't any more serverless (even from the perspective of what the customer needs to manage) than common PaaS offerings.
OTOH, I kind of like the way Google seems to have adopted it as a blanket term for cloud services where the customer is not concerned with physical or virtual servers as such; it seems the term is being wrestled into being descriptive and non-deceptive.
> Every technical person understands there's still a server there. So it seems like a marketing tactic intent on misleading clueless CEOs.
The non-existence of servers isn't what it communicates; only technical people who lack any sense of what matters to the business world think that. (It's also targeted more at CTOs/CIOs than CEOs.)
Everybody also knows that a wireless vacuum has wires inside it, the value prop is that the wires never get in your way. And so with serverless.
Instead of a couple of servers you have thousands of small virtual servers running on real servers, and we call that serverless. It's comical.
I wonder what kind of hell this approach will become for legacy serverless systems, where there is stuff running everywhere and no one has a clue where to pull the plug or what to patch.
It’s the name. It infuriates people because obviously there still really are servers. But I think of it like WiFi - there still are wires, you just don’t see them.
With wireless networking and even phones, the wires have actually been eliminated, replaced wholesale by something else, radio. A better analogy would be something like the powerline ethernet systems, but nobody is calling them something controversial like "wireless" or "cableless".
Put another way, I've seen cloud computing (of which "serverless" is, arguably, merely an evolution into increasing levels of abstraction) called "somebody else's servers". The equivalent with the Wifi analogy would be "somebody else's wires", and, usually, what wires there are, for the backhaul [1], aren't even somebody else's.
EDIT:
Ultimately, my point is this:
The analogy is weak because WiFi is not an abstraction layer on top of wired networking that merely hides the existence of (and, ideally, some of the downsides of dealing with) the wires; it is a different technology with different upsides and downsides. Serverless is such an abstraction layer.
[1] Which brings up a nitpick: WiFi can be entirely wireless, with various mesh networking techniques replacing even the backhaul.
I've been running a small service similar to their new "Serverless Platform" for some time and was approached by them in 2014 to see about joining their team.
Ultimately I ended up deciding not to join because I wasn't convinced there was a strong enough engineering presence in their leadership to make a good product. The next couple of years should be interesting to watch if they can actually build a profitable product.
That seems like a strange requirement. Ultimately, for a business to be successful there have to be people who know business, marketing, and sales. If the leadership team is all hard-core tech engineers, there will be a lack of all the other social and fundamental business skills needed.
The other side here is that your customers are likely engineers themselves. You need to build products that connect with them and genuinely make their lives easier... if your leadership is too far removed from that, you’ll end up with a product shaped via a game of telephone...
With that said, some strong hires early on can make a real difference here.
Just like now is to people who were programming in the 1990s, or the 1990s to people who were programming in the 1970s.
But that is the price of agility. Serverless is just another abstraction on top that increases agility at the price of increased compute.
To take it another step, pretty much none of them understand how the JVM or V8 run their code.
This was a big selling point of virtualization, originally. It was certainly true for environments that suffered from poor utilization due to, say, running one app per (often oversized and/or antiquated) physical servers, as I believe was common for enterprise IT shops.
Whether this improvement could have been achieved by other technical means (at least in non-Windows environments) is debatable. It's also unclear what percentage of total hardware utilization enterprise IT accounted for back then, and I suspect it was much higher than today.
For other environments, where virtualization would replace simple Unix time-sharing, it stands to reason that hardware utilization had to go up, if only moderately.
Interestingly, enterprise IT practices are still so extremely expensive today that moving to a cloud provider is obviously cheaper for them.
The amount of waste already present would boggle your damn mind.
Non-relational data should be stored in a non-RDBMS. Key-value stores like Redis are immensely useful as caching layers (though they offer many more features). Graph databases can be used for data with complex relationships that are not easily modeled relationally; they are also good for finding strong correlations between related items (think person A called person B called person C - Palantir-type searches). Search can be done far more effectively in a specialized index, like the inverted index used by Lucene/Elasticsearch, which also supports stemming, synonyms, and numerous other features. These are all "NoSQL". NoSQL is not just MongoDB (which isn't nearly as bad as people make it out to be, btw).
Even traditional RDBMSs are seeing an influx of NoSQL-esque features, like JSON types and operations in Postgres.
The reason "NoSQL" DBs got popular, in my experience, is that large monolithic relational databases are hard to scale and to manage once they become too complex. When you have one large database with tons of interdependencies, migrating data and making schema changes becomes much harder. This, in my opinion, is the biggest issue (more so than the performance problems associated with doing joins to the n-th degree, which are also an issue).
It also makes separating concerns of the application more difficult when one SQL connection can query/join all entities. In theory better application design would have separate upstream data services fetch the resources they are responsible for. That data can be stored in a RDBMS or NOSQL, but NOSQL forces your hand in that direction.
As it goes for serverless, this just seems like a natural progression from containerization, I'm interested to see where the space goes.
Personally I think it's foolish to put your head in the sand when the industry is changing, or learning new concepts.
I've met a lot of people who thought they had to scale that big. Very few handled anything that couldn't run off a beefy Postgres installation.
The purpose of a system is what it does. People don't use nosql to scale because they don't need to scale, so what does it do? People use nosql to not write schemas. That's what it's for, for the majority of users.
If I need a key value store, I use a key value store. There's no flashy paradigm there. If I need to put a container up on the interwebs, I do it. What's serverless? Nosql is an "idea", "paradigm", "revolution", or at least the branding of one. Just the same, serverless.
I will continue to ignore nosql and serverless.
The industry sure does change, but do you know how much of that is moving in a real direction and how much is a merry-go-round? Let's brand it "Carousel" and raise 10 million. And in 20 years we can talk about serverless being the new hotness, again.
DB2 on z/OS was able handle billions of queries per day.
In 1999.
Some greybeards took great delight in telling me this sometime around 2010 when I was visiting a development lab.
> When you have one large database with tons of interdependencies, it makes migrating data, and making schema changes much harder.
Another way to say this is that when you have a tool ferociously and consistently protecting the integrity of all your data against a very wide range of mistakes, you have to sometimes do boring things like fix your mistakes before proceeding.
> In theory better application design would have separate upstream data services fetch the resources they are responsible for.
A join in the application is still a join. Except it is slower, harder to write, more likely to be wrong and mathematically guaranteed to run into transaction anomalies.
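To illustrate what an in-application join looks like (data shapes and names here are hypothetical): the database would do this declaratively and atomically in one SQL statement, while here consistency between the two fetches is entirely on us.

```javascript
// An application-side "join" across two upstream services: stitch orders
// to their users by hand after fetching each collection separately.
function joinOrdersWithUsers(orders, users) {
  // Index users by id so the join is O(n) instead of O(n^2).
  const usersById = new Map(users.map((u) => [u.id, u]));
  return orders.map((o) => ({ ...o, user: usersById.get(o.userId) }));
}

// Hypothetical payloads, as if returned by two separate services.
const users = [{ id: 1, name: 'ada' }];
const orders = [{ id: 10, userId: 1, total: 42 }];
console.log(joinOrdersWithUsers(orders, users));
```

And this sketch doesn't even handle the hard parts - pagination, partial failures, or a user row changing between the two fetches.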
I think non-relational datastores have their place. Really. There are certain kinds of traffic patterns in which it makes sense to accept the tradeoffs.
But they are few. We ought to demand substantial, demonstrable business value, far outweighing the risks, before being prepared to surrender the kinds of guarantees that a RDBMS is able to provide.
I've yet to work on a NoSQL system where I thought "thank goodness we didn't use a structured database!". Instead, every time, it's been the HiPPO trying to defend the decision while everyone else just deals with it. NoSQL seems like taking out a giant loan... you're going to need to organize and parse your data at some point (or why would you keep the data?). Pushing that decision into the future just makes it harder on everyone.
The very few pure schemaless databases that continue to exist and where I'm convinced they will continue to exist for a long while are those that specialize a lot (ie, Redis, Elasticsearch, a lot of the Timeseries databases).
You'll definitely be able to ignore it and it probably won't be used in smallish companies for ages.
It's just an easier way to get your application to scale than homebuilt Docker images are.
Why do you say this? I feel like this would be very useful for smallish companies. I'm running eng for my 3 person startup and looking into using Lambda-based microservices with Serverless for our next project. My goal is to completely minimize devops time for our engineers, as well as reduce cost compared to PaaS services.
A developer still has to understand the implications of resource consumption etc. For performance-critical pieces of code, IMO it's better to have direct access to the hardware - I had recent first-hand experience with this while debugging a NUMA-related performance issue.
I mean, CGI has always existed. This serverless hype is basically a rebrand of CGI with some fancy orchestration around autoscaling across boxes (which, tbh, isn't really that much work, and most people don't need the scale required to make it worthwhile anyway).
The point of "serverless" is not having to deal with an "IT guy" at all - the one who complains about setting up your app because you updated the stack and now it collides with everything else on the same server. It also means you don't necessarily have to use containers.
Docker doesn’t replace the need to know how VMs work. Containers don’t magically allow you to scale to infinity (Although k8s shows a lot of potential). And you probably should be using PostgreSQL instead of NoSQL unless you’re absolutely sure you’re smart enough to know why PostgreSQL can’t work for your use case.
Serverless is great if you want to replace a cron job, the value of the function firing is substantially higher than the cost to run it (“margins are crazy high, optimize to VMs later and ignore the AWS bill for now”), or you’re executing untrusted code in isolation for customers.
That said, I’m not sold on serverless.
Testing is hard, the more AWS shit you tie yourself to the harder local testing and development becomes. I picked up a lambda project another developer started and asked them how they were testing and developing locally. Answer: They deployed a new update for every change (!?)
Debugging: Look at log files...
Also, at some point serverless added some autocomplete crap to my .bashrc file without asking which I will never forgive them for.
Deployed our first production tool with it and it's been working great.
I was looking at these two recently and ended up going with chalice as the docs seemed a bit simpler and more readily accessible.