I have tried building an API using API Gateway <-> Lambda, but had to choose between using DynamoDB to store data (NoSQL, so challenging to query) or suffering unacceptably long response times whenever a request happens to cause a cold start. Theoretically, this problem is now going away!
You could always have fast startup with Lambda + database outside the VPC.
"A database server was found with an open port exposed to the internet and no or poor authentication, all records were exposed."
This should also mean that Lambdas can get stable public IPs through a VPC, which is useful for firewall rules as well.
I think it’s best not to expose the DB to outside connections in general, although it is still possible [1] when using RDS instances.
I think this is different for things like DynamoDB because, instead of a standard SQL-like db “connection”, they use AWS role-based auth for each request.
Of course, one could always configure some type of proxy service between the lambda and the DB... but that seems antithetical to going “serverless” in the first place.
[1] https://stackoverflow.com/questions/45227397/publicly-access...
Edit: I thought it was not possible to expose an RDS instance outside of a VPC, but I was wrong (you can place it in a public subnet, linked in [1]).
AWS is a beautiful mix of business and technology; it's very rare to see such a large engineering-driven organization balance that with customer friendliness. I'm an unashamed fanboy.
As far as I know this was only an issue for legacy architectures.
Stuff like this is a pain in the ass; it was a major problem.
Overall, definitely a big win!
You can connect API Gateway directly to other services via Velocity (VTL) mapping templates, which don't have cold starts.
AppSync also doesn't suffer from cold starts.
Both are also serverless services.
Lambda is good if the other solutions are missing something, so you can drop it in quickly, but I wouldn't use it as the go-to service for that...
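For context, the direct integration mentioned above is driven by a VTL request mapping template. A rough sketch of one that writes a POST body straight into a hypothetical DynamoDB table (the table and attribute names are made up):

```
{
  "TableName": "Orders",
  "Item": {
    "id": { "S": "$context.requestId" },
    "payload": { "S": "$util.escapeJavaScript($input.body)" }
  }
}
```

API Gateway renders this template and calls DynamoDB's PutItem itself, so there's no function container to cold-start.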
There has never been a guarantee of environment reuse. Any architecture which isn't capable of incurring cold starts is not a good fit for serverless.
How many lambdas do you keep warm? 5, 10, 20? Every new connection is a new lambda instance. You're still just delaying the inevitable.
Just use Fargate if you want to stay serverless and don't want the cold start times -- well at least before today.
There are some workarounds that use multiple lambdas, but they have their own gotchas.
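One common workaround is a scheduled "warmer" ping that keeps a container resident; the gotcha is that each ping only keeps one instance warm. A hypothetical handler sketch:

```python
def handler(event, context=None):
    # A scheduled CloudWatch Events / EventBridge rule can ping the
    # function every few minutes; detect the ping and exit early
    # without doing any real work.
    if event.get("source") == "aws.events":
        return {"warmed": True}
    # ... normal request handling would go here ...
    return {"statusCode": 200, "body": "handled"}

print(handler({"source": "aws.events"}))  # the warming-ping path
```

A burst of concurrent real traffic still spins up fresh (cold) instances beyond the one you've kept warm, which is the "delaying the inevitable" point above.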
Still, hooray, this is good news. The Data API is great for Serverless Aurora, but I can't use that with BI tools.
Going to AWS to save money on resources is about like going to the Apple Store to buy a cheap laptop.
AWS does tons of stuff around VPCs... I feel like they really want me to use them (or their customers really want to use them), but I just don't see why.
I just run RDS on the internet. I don't have to muck with the complexity or cost of NATs or peering or Lambda slow start or any other weird networking issues.
I know it's "public", but that seems irrelevant in the era of cloud services. This isn't any different than, say, how Firebase or a million other services run. Should I be concerned that my Firebase apps are insecure because someone isn't overlaying a 10.* network on them?
EDIT: I should clarify that I understand the legitimacy of security groups, especially for technologies that weren't meant to operate outside a firewall. But that's mostly a different subject; AWS had security groups years before VPCs and subnets and NATs.
My grandchildren are still going to be NAT'ing.
This attitude serves two AWS interests: 1) it keeps lock-in by encouraging customers to build AWS-internal networks, and 2) it doesn't scare away the lift-and-shift customers who want to transplant their 1990s-style "intranet" (or their mental model of one, at least) onto AWS.
It also explains why they aren't very keen on IPv6: that would encourage internetworking.
Just don't tell anyone that you can access the AWS console from the internet :)
But that's a good point.
I suppose all the services I use already have security models (usually more complex, multi-user ones, so agent X can read but not modify, etc.).
HOWEVER...this could be solved with security groups, but it seems that's not the model AWS has emphasized. Security groups are orthogonal to NAT and private networks; AWS had security groups before it had VPCs.
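As a sketch of that security-group model (a CloudFormation fragment; the names and CIDR are hypothetical), you can keep a database port reachable only from one address range without any NAT or private subnet in the picture:

```yaml
DbSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow Postgres only from the office range
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 5432
        ToPort: 5432
        CidrIp: 203.0.113.0/24   # hypothetical allowed range
```

The filtering happens at the instance's group membership, independent of whether the subnet is public or private.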
Also, VPCs are really useful if you have many systems and services (yours or theirs) inside AWS.
Or MySQL. Or SQL Server.
Isn't that a core idea of Firebase? Or Dynamo?
VPC is a very convenient fit for enterprise customers extending on-premises networks into the cloud; I think that's the market it's mainly focused on.
> I know it's "public", but that seems irrelevant in the era of cloud services.
It's not irrelevant, but neither is it necessarily critical all the time; there doesn't need to be a one-size- (or even one-shape-)fits-all universal approach to network security, and AWS encompasses a lot of different customer setups, including enterprises for which it is a virtual extension of the on-premises internal network.
So there is always a bias toward the things that create more lock-in, like VPC.