For context: while exploring the load testing tool Siege from a VPS, I was able to bring down multiple sites running on shared hosting, and some running on small VPSes, by setting a high enough number of concurrent users. This is not a DDoS, but it goes to show how easy it is to cause damage. Note: I only brought down sites that I own, or those of friends, with their permission.
What tools are useful in fighting DDoS attacks and script kiddies? Mention free and paid options.
What are the options to limit damage in case of an attack? How do you limit bandwidth usage charges?
There was a previous discussion on this topic 6 years ago https://news.ycombinator.com/item?id=1986728
The simple advice for layer 7 (application) attacks:
1. Design your web app to be incredibly cacheable
2. Use your CDN to cache everything
3. When under attack seek to identify the site (if you host more than one) and page that is being attacked. Force cache it via your CDN of choice.
4. If you cannot cache the page then move it.
5. If you cannot cache or move it, then have your CDN/security layer of choice issue a captcha challenge or similar.
The simple advice for layer 3 (network) attacks:
1. Rely on the security layer of choice, if it's not working change vendor.
On the L3 stuff, when it comes to DNS I've had some bad experiences (Linode, oh they suffered) some pretty good experiences (DNS Made Easy) and some great experiences (CloudFlare).
On L7 stuff, there are a few things no one tells you about... like if your application backs onto AWS S3 to serve static files, the attack can be on your purse, as the bandwidth costs can really add up.
It's definitely worth thinking of how to push all costs outside of your little realm. A Varnish cache or Nginx reverse proxy with file system cache can make all the difference by saving your bandwidth costs and app servers.
I personally put CloudFlare in front of my service, but even then I use Varnish as a reverse proxy cache within my little setup to ensure that the application underneath it is really well cached. I only have about 90GB of static files in S3, and about 60GB of that is in my Varnish cache, which means when some of the more interesting attacks are based on resource exhaustion (and the resource is my pocket), they fail because they're probably just filling caches and not actually hurting.
The places you should be ready to add captchas as they really are uncacheable:
* Login pages
* Shopping Cart Checkout pages
* Search result pages
Ah, there's so much one can do, but generally... designing to be highly cacheable and then using a provider who routinely handles big attacks is the way to go.
None of my guests have noticed this, and it has increased most of my analytics numbers as my pages are faster too.
The signed-in users, they get the dynamic pages.
But now the cookie that identifies the user is what you use to correlate any attack traffic: the attacker is forced to (somewhat) identify themselves, and you can then revoke their authentication status or ban the account.
Finally you captcha and/or rate-limit the login page.
This is effectively what I do on my sites, the pages themselves and the underlying API all cache if the cookie or access token is absent.
This is trivial to do within the code, but can be harder to do with the CDN/security layer (who need to support a "vary on cookie" or "bypass cache on cookie" or equivalent).
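In code, the "bypass cache on cookie" decision can be a one-line policy check. A minimal sketch (the cookie name, header shape and TTL here are hypothetical, not from any particular framework):

```python
def cache_policy(headers, session_cookie="session"):
    """Pick a Cache-Control value based on whether the request carries
    an auth cookie: anonymous traffic is cacheable, signed-in is not."""
    cookies = headers.get("Cookie", "")
    has_session = any(
        part.strip().startswith(session_cookie + "=")
        for part in cookies.split(";")
    )
    # Anonymous traffic: let the CDN cache it; signed-in users bypass.
    return "private, no-store" if has_session else "public, max-age=60"

assert cache_policy({"Cookie": "session=abc123"}) == "private, no-store"
assert cache_policy({}) == "public, max-age=60"
```

The CDN side then only needs to honour Cache-Control, or support "bypass cache on cookie" as described above.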
You can imagine that for a real time service it would be better to provide a timeout immediately rather than providing stale data.
HN is an example of a near-real-time site where some delay is perfectly acceptable. No one cares that they're receiving a 2-second-old page; it's better for the site's users to receive old data fast than new data slow.
If you use nginx, the following directives help out significantly (if I remember them correctly):

proxy_cache_use_stale updating;
proxy_cache_lock on;
proxy_cache_lock_timeout 1s;

This config lets nginx serve the stale cached copy while a single request refreshes it from the upstream application server; as soon as fresh data arrives, it's used immediately.
If that's wrong hopefully someone can correct the conf.
You can put the whole of HN into read only mode if needed and it'll have no real impact; disallowing purchases on MyAmazonCompetitor.com would be catastrophic.
Lots of people use page caching to speed up their website, but that's a mistake, since caching means stale data on dynamic sites. Caching should only be used to solve resource issues, not latency issues.
Your entire site should be fast already without caching. This comments page should only take a few milliseconds to generate. If it doesn't, then something's wrong with the database queries.
I will never understand how some sites take hundreds of milliseconds to generate a page.
Of course, this is precisely the attack that works on a search page, hence the advice above to be ready to captcha that if you haven't.
Cache anything GETable; for everything else you need to think about how to validate the good traffic (trivially computable CSRF tokens help) and captcha the rest.
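A "trivially computable" CSRF token can be as simple as an HMAC over the session ID, so validating it needs no storage lookup at all. A sketch (the key and names are hypothetical):

```python
import hmac, hashlib

SECRET = b"server-side-secret"  # hypothetical key, kept out of the client

def csrf_token(session_id: str) -> str:
    # Stateless token: validation recomputes the HMAC, so rejecting
    # bad traffic costs almost nothing on the server.
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def is_valid(session_id: str, token: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(csrf_token(session_id), token)

tok = csrf_token("user-42")
assert is_valid("user-42", tok)
assert not is_valid("user-43", tok)
```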
404s, 401s, etc... they should cost the underlying server as little resource as possible and also cache their result at an applicable cache layer (404s at the edge and 401s internally, 403s at the edge if possible, etc).
1. Get from cache
2. Determine if cached value is valid
3. Query data store
4. Put data store value in cache
5. Return data
Instead of just getting it directly. In order to cache well you need to think about good cache invalidation. And client-side caching won't work against malicious users.
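The read path in the steps above can be sketched as a classic cache-aside lookup (the TTL and the data store here are stand-ins):

```python
import time

cache = {}   # key -> (value, expires_at)
TTL = 60     # seconds; arbitrary for illustration

def db_query(key):
    # Stand-in for the real data store call.
    return f"row-for-{key}"

def get(key):
    entry = cache.get(key)                   # 1. get from cache
    if entry and entry[1] > time.time():     # 2. cached value still valid?
        return entry[0]                      # 5. return data
    value = db_query(key)                    # 3. query data store
    cache[key] = (value, time.time() + TTL)  # 4. put data store value in cache
    return value                             # 5. return data

assert get("a") == "row-for-a"
assert get("a") == "row-for-a"   # second call is served from the cache
```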
We have many layers of protection:
* We run iptables and an API we wrote on our ingest servers. We run fail2ban on a separate set of servers; when fail2ban sees something, it hits the API, which adds the iptables rules. This offloads fail2ban's CPU cost from our ingest servers.
* We block groups of known hosting company IP blocks, like DigitalOcean and Linode. These were common sources of attacks.
* Our services all have rate limits, which we throttle based on IP.
* We have monitoring and auto-scaling which responds pretty quickly when needed, with service-level granularity.
* Recently we moved behind Cloudflare, because Google Cloud did not protect us from attacks like the UDP floods, which didn't even reach our servers.
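A minimal sketch of per-IP rate limiting like the above, as a sliding window over timestamps (the window and limit values are made up; a production version would need eviction and shared state):

```python
import time
from collections import defaultdict

WINDOW = 60   # seconds
LIMIT = 100   # requests per window per IP; illustrative only

hits = defaultdict(list)

def allow(ip, now=None):
    """Return True if this IP is still under its request budget."""
    now = time.time() if now is None else now
    # Drop timestamps that fell out of the window, then count.
    hits[ip] = [t for t in hits[ip] if t > now - WINDOW]
    if len(hits[ip]) >= LIMIT:
        return False
    hits[ip].append(now)
    return True

assert all(allow("1.2.3.4", now=0) for _ in range(100))
assert not allow("1.2.3.4", now=1)   # 101st request in the window is throttled
assert allow("5.6.7.8", now=1)       # other IPs are unaffected
```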
If the attackers are persistent, there is really no way to guarantee zero downtime. THEY WILL FIND A WAY. Just make sure your stakeholders know you are doing everything in your power to resolve the issues, and then actually do those things.
An anecdote:
We had been seeing DDOS attacks for a few weeks, so we had most everything locked down and working. But then suddenly one of the most important parts of our site started going down under load. That part is a real time chat system. We looked for which chat room had the load and it was one which did not require a user be registered. We switched the room into registered users only mode and thought we had solved it.
About 5 minutes later the attack came back with all registered users. We were amazed, because there is no way the attackers could have registered that many accounts in 5 minutes given our rate limiting on that. Turns out they had spent the past week or so registering users in case they needed them :)
For example:
curl https://104.154.116.193 -H 'Host: www.stream.me' -v -k
Btw, check out curl's --resolve flag (e.g. --resolve www.stream.me:443:104.154.116.193). You can use it to override default DNS resolution and then drop the -k flag, since the certificate will validate against the real hostname.
Firstly, we are built to endure any DDoS the internet has yet seen on our peering, backbone, and edge servers for CDN services. This is quite important when you are tasked with running a large percentage of the interweb but probably not practical for most organizations, mostly due to talent rather than cost (you need people that actually understand networking and systems at the implementation level, not the modern epithet of full stack developer).
But, it is critical to have enough inbound peering/transit to eat the DDoS if you want to mitigate it -- CDNs with a real first party network are well suited for this due to peering ratios.
Secondly, when you participate in internet routing decisions through BGP, you begin to have options for curtailing attacks. The most basic reaction would be manually null routing IPs for DoS, but that obviously doesn't scale to DDoS. So we have scrubbers that passively look for collective attack patterns hanging on the side of our core, and act upon that. Attack profiles and defense are confirmed by a human in our 24/7 operations center, because a false positive would be worse than a false negative.
Using BGP, we can also become responsible for other companies' IP space and tunnel a cleaned feed back to them, so the mitigation can complement or be used in lieu of first party CDN service.
In summary, the options are pretty limited: 1) Offload the task to some kind of service provider 2) Use a network provider with scrubbing 3) you've hired a team to build this because you are a major internet infrastructure.
-DDoS you can handle (small ones): anything up to 1 or 2 Gbps, or around 1M packets per second.
-DDoS you cannot handle: anything higher than that.
For the smaller DDoS attacks, you can handle it by adding more servers and using a load balancer (eg. ELB) in front of your site. Both Linode and DigitalOcean will null route your IP address if the attack goes above 100-200Mbps, which is very annoying. Amazon and Google will let you handle on your own (and charge you for it), but you will need quite a few instances to keep up with it.
For anything bigger than that, you have to use a DDoS mitigation service. Even bigger companies do not have 30-40Gbps+ capacity extra hanging around just in case.
I have used and engaged with multiple DDoS mitigation companies and the ones that are affordable and good enough for HTTP (or HTTPS) protection are CloudFlare, Sucuri.net and Incapsula.
-CloudFlare: The most popular one; works well for everything but L7 attacks (in my experience). You need their paid plan, since the free one does not include DDoS protection; they will ask you to upgrade if you get attacked.
-Sucuri.net: Not as well known as CloudFlare, but they have a very solid mitigation. Have been using them more lately as they are cheaper overall than CloudFlare and have amazing support.
-Incapsula: I used to love them, but their support has been really bad lately. They are on a roll trying to get everyone to upgrade their plans, so that's been annoying. If you can do stuff on your own, they work well.
That's been longer than what I anticipated, but hope it helps you decide.
thanks,
http://www.bauer-power.net/2016/03/incapsula-had-major-world...
To be fair, they all have some downtimes from time to time.
- Every one of our servers rate limits critical resources, i.e. the ones that cannot be cached. The servers autoscale when necessary.
- As rate limiting is expensive (you have to remember every IP/resource pair across all servers) we keep that state in a locally approximated representation using a ring buffer of Bloom filters.
- Every cacheable resource is cached in our CDN (Fastly) with TTLs estimated via an exponential decay model over past reads and writes.
- When a user exceeds his rate limit, the IP is temporarily banned at the CDN level. This is achieved through custom Varnish VCL deployed in Fastly. Essentially the logic relies on the backend returning a 429 Too Many Requests for a particular URL, which is then cached using the requester's ID as a hash key. Using the restart mechanism of Varnish's state machine, this can be done without any performance penalty for normal requests. The duration of the ban is simply the TTL.
TL;DR: Every abusive request is detected at the backend servers using approximations via Bloom filters and then a temporary ban is cached in the CDN for that IP.
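A loose sketch of the ring-of-Bloom-filters idea (not their actual implementation; the sizes and class name are invented). Each slot in the ring is a tiny Bloom filter; the approximate hit count for a key is the index of the first filter that doesn't contain it yet:

```python
import hashlib

class BloomRing:
    """Approximate per-key hit counting with a ring of small Bloom filters."""

    def __init__(self, slots=4, bits=1024, hashes=3):
        self.ring = [0] * slots  # each slot is an int used as a bitset
        self.bits, self.hashes = bits, hashes

    def _positions(self, key):
        digest = hashlib.sha256(key.encode()).digest()
        return [int.from_bytes(digest[i * 4:i * 4 + 4], "big") % self.bits
                for i in range(self.hashes)]

    def hit(self, key):
        """Record one hit; return the approximate hit count so far."""
        pos = self._positions(key)
        for i, bitset in enumerate(self.ring):
            if not all(bitset >> p & 1 for p in pos):
                for p in pos:
                    self.ring[i] |= 1 << p
                return i + 1
        return len(self.ring) + 1  # saturated: key exceeded the ring's capacity

ring = BloomRing()
assert ring.hit("10.0.0.1") == 1
assert ring.hit("10.0.0.1") == 2
assert ring.hit("192.0.2.7") == 1   # other keys tracked independently
```

The memory cost is fixed regardless of how many IPs hit you, at the price of a small false-positive rate; rotating or clearing slots over time gives the sliding-window behaviour.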
Looks like you're hosting at least some stuff at Hetzner, they're not going to do any filtering for you.
And since we are in the Backend-as-a-Service market, the name is not all that unfitting. Although it cannot be denied that from time to time some people think we are French and spelled "Baquend".
OVH include DDOS protection by default[0] and they have a very robust backbone network[1] in Europe and North America that they own and operate themselves (this is how & why anti-DDOS is standard with them).
For quick side-projects I still fire up a DigitalOcean instance or two because their UX is so slick and easy. If I needed huge scale and price didn't matter I would probably go with AWS (their 'anti-DDOS' is their vast bandwidth + your ability to pay for it during an attack). For everything else, I put it on OVH.
If you need something low-cost and dedicated try their SoYouStart range, which is just their last-generation hardware. Going from a VPS to SYS is a huge performance jump for minimal cost. They have a higher guaranteed minimum bandwidth throughput than the VPSs and may get you better support possibilities. Cost is similar to a mid-size VPS.
Be careful running services like game servers or VoIP, or anything else using UDP, though, since UDP is subject to much more stringent filtering at OVH and may be affected during mitigation.
On iWeb, however, they null-routed us for half a day.
The main issue is that I lost a bit of faith in their support and reliability: vRacks going down for hours with no updates, connectivity issues, servers disappearing.
Besides that, their DDoS protection works well for l3 attacks, except that they force a TCP reset on every connection. So if you are picky about extra connect times and having your clients re-establish their connections, they are great.
I don't know what to make of the bad reviews you found, my personal experience has been great for several years now. Multiple products used, the occasional support ticket with quick response, and decent pricing.
Anecdote: I had my dedicated server suddenly go down because it overheated. Wouldn't come back to life. Two days after submitting a ticket and getting no input, the machine suddenly came back up without any explanation. A day later, I got a mail saying the motherboard was broken and got replaced. Overall it was a very unpleasant experience, but it's to be expected given the low price of Kimsufi.
We've had very few hardware-related issues, a disk failing or a motherboard to be replaced. In all of those cases, the components were swapped promptly and we were kept informed of the progress.
Where we're unhappy is with the network, especially with their vRack offering. Looking back at our production incidents of the past 6 months, about 50% of them were caused by some vRack problem where at the same time the public interfaces were up and running just fine.
We're generally happy with customer service, but we pay for VIP support and we speak French to OVH's support agents (I believe that the latter helps a lot).
First off you need to determine where the attack is coming from. You could redirect based on IP/request headers in a .htaccess file or apache rules.
Your next bet is to distribute/auto-scale your application if possible.
You need to setup a web application firewall that sits in front of your web servers and analyzes the requests/responses that hit the web servers. A lot of the ddos campaigns are easy to identify based on the request headers/IP/Geo and requests/second.
It's not hard to write a small web server/proxy to do this, but it would be best left to someone who knows what they're doing because you don't want to block real user requests. You can use ModSecurity's open source WAF for apache/nginx, but again you have to know what you're doing.
When I faced this issue, I wrote a small web server/proxy here that you can start on port 80:
https://github.com/julesbond007/java-nio-web-server
Here I wrote some rules to drop the request if it's malicious:
https://github.com/julesbond007/java-nio-web-server/blob/mas...
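In the same spirit, a toy version of such header/IP rules (these checks are illustrative, not the ones in the linked repo):

```python
# Hypothetical rule set: block by source IP prefix or by missing User-Agent.
# A real WAF such as ModSecurity is far richer than this.
BLOCKED_PREFIXES = ("203.0.113.",)   # example attacker range (TEST-NET-3)

def is_malicious(request):
    """Classify a request dict {ip, headers} against simple drop rules."""
    ip = request.get("ip", "")
    headers = request.get("headers", {})
    if any(ip.startswith(p) for p in BLOCKED_PREFIXES):
        return True
    if not headers.get("User-Agent"):  # crude flood tools often omit it
        return True
    return False

assert is_malicious({"ip": "203.0.113.9", "headers": {"User-Agent": "x"}})
assert is_malicious({"ip": "198.51.100.2", "headers": {}})
assert not is_malicious({"ip": "198.51.100.2",
                         "headers": {"User-Agent": "Mozilla/5.0"}})
```

As the comment above says: the hard part is not writing the check, it's choosing rules that never drop real users.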
For static content there is always CDN. Costly, but it works in a pinch while you're planning your other moves.
The one thing left to worry about is dynamic content. Depending on the application you could restrict all requests to authorized users only while under attack.
This isn't a complete solution by any means, but reduced the attack surface considerably.
https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June20...
1- For small attacks you can optimize your stack, cache your content and use a provider that allows you to quickly scale and add more servers to handle the traffic. Do not use Linode or DigitalOcean, as they will null route you.
OVH, AWS and Google are the ones to go with.
2- Use a DDoS mitigation / CDN provider that will filter the attacks and only send clean traffic back to you.
The ones recommended so far:
I used to get attacked with a huge load of corrupt UDP packets for a few seconds, which would hang the main server and, within 1 or 2 minutes, disconnect all my players.
Solution: separate your UDP services from your TCP services in separate applications and servers, also use different type of protection services for each.
The attack still hung the UDP services, so I started thinking about writing a Snort plugin to analyse the traffic and only allow legit protocol packets. I never got around to it, because the attackers stopped once they noticed no one was being disconnected.
BTW, for TCP and HTTP I just used a tiny service that protects against SYN floods, like Voxility resellers.
If you have custom protocols, you have to get a full /24 mitigation and so far nobody can beat Arbor into it. Very expensive, but works well if you have BGP.
The only reason why they're not constantly called out by serious infosec folk for their scam is because they hire guys also involved in DefCon/BlackHat planning (try to sneak a hostile talk against Cloudflare past REDACTED[2] who btw is also advising Mr. Robot). It's lobbying at its finest.
[0] https://scotthelme.co.uk/tls-conundrum-and-leaving-cloudflar...
[1] https://blog.torproject.org/blog/trouble-cloudflare
EDIT: [2] redacted name since there is more than one, please duckduckgo by yourself.
* Longtime repeat speaker at Black Hat
* Repeat review board member (including this year's), and
* Extreme skeptic of Cloudflare's
I do not believe this is true. If you have a talk that is on topic for Black Hat and is harmful to Cloudflare, you'll get accepted. There's no one person who screens Black Hat talks; it's a panel of people, with several of the longstanding members of that panel (I'm not one of those) being more or less unimpeachable (Mark Dowd, Chris Eagle, Alex Sotirov, Dino Dai Zovi). None of these people are in the tank for Cloudflare. In fact: for most of the review board, none of them give a shit about Cloudflare.
The process isn't perfectly transparent! But it's such that if you submitted a talk, and it got shitcanned before reviewers even saw it, and you made a stink about it on Twitter, people would notice.
I generally agree with your assessment of Cloudflare as a threat to the Internet, for what it's worth. I just don't think you're right that they've gamed Black Hat.
I can see that you are not happy with what they provide. Luckily their service is not forced on you: you don't have to use it, nor visit servers that use it.
Non-volumetric attacks like SYN or HTTP floods can be mitigated with appropriate rate limiting or firewalling.
Some providers like OVH have decent network-level mitigation in place, but you're not gonna find that on a $5 VPS where they're more than happy to null route you to protect their network.
Some SYN floods can generate millions of packets per second, which is far more than a dedicated Linux server can handle.
Good video on the topic:
https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June20...
AWS DDoS defense using rate based blacklisting
https://blogs.aws.amazon.com/security/post/Tx1ZTM4DT0HRH0K/H...
DDoS protection providers offer a remote solution to protect any server / network, anywhere: https://sharktech.net/remote-network-ddos-protection.php
B) make sure your servers don't fall over while getting full line rate of garbage incoming (this is not hard for reflection or synfloods, but is difficult if they're hitting real webpages, and very difficult if it includes a tls handshake)
C) bored ddos kiddies tend to ddos www only, so put your important things on other sub domains
D) hope you don't attract a dedicated attacker
This is one of the reasons I would consider managed hosting as opposed to AWS, Digital Ocean, etc. With any good managed hosting provider, they are going to take steps to help deal with the DDoS. Depending on your level of service and the level of the attack, of course. But they will have an interest in helping you deal with and mitigate the attacks.
The reality is that true DDoS solutions are expensive, and if you have a "small website" then you're probably not going to be able to afford them. But if you're at a good sized hosting provider, they're going to need to have these solutions themselves and can hopefully put them to use to protect your site.
Verisign and others offer this service, typically using DNS; often they support BGP as well.
2. Add limiting factors: if you have an abusive customer, rate limit them in nginx. If you are expecting a heavy day, rate limit the whole site.
3. Stress testing and likely designing your website to withstand DDoS attacks.
You can cache or not cache; that's not really the question. Handling a DDoS means what can you do to mitigate the extreme amount of traffic and still allow everything else to work.
http://www.linuxjournal.com/content/back-dead-simple-bash-co...
If you do piss anyone off, keep records of everything. Make sure you know who they are, and where they live, before you start doing business with them. This lets you send the police after them if they hire someone to DDoS you. Bad people need to be removed from the pool to reduce these sorts of attacks. Record 100% of your phone calls; Android has free apps to do this automatically. If you're in a state that requires two-party consent, move to a state with one-party consent. Sanity in laws = freedom of citizens.
http://www.level3.com/~/media/files/brochures/en_secur_br_dd...
Thinking like an attacker, wouldn't the most effective DoS be to find a CPU or memory intensive part of an application and use a small amount of bandwidth to create a large impact?
L7 attacks can be scrubbed by the same infrastructure. Beyond that, it's all a matter of detection. The computational expense of L7 inspection can be mitigated by sampling or scaled with ECMP. You may see a "WAF" (Web Application Firewall) enter the picture at this level.
How has been your experience with it?
Random "wisdom", in no particular order; more like do's and don'ts that I picked up dealing with, and executing, DoS/DDoS attacks.
Testing, testing, testing: regardless of how you choose to implement your mitigation, test it and test it well, because there is a lot you need to know.
Know and understand the exact effect your DDoS/DoS mitigation has: the leakage rate, which attacks can still bring you down, and the cost of mitigation.
Make sure you do the testing at different hours of the day; otherwise you'd better know your application and every business process very well. I've seen cases where a 50GB/s DDoS would do absolutely nothing, except on Tuesday and Sunday at 4AM when some business batch process would start, and the leakage from the DoS attack plus the backend process was enough to kill the system. Common processes that can screw you over: backups, site-to-site or co-location syncs/transfers, and various database-wide batches. Common times for these: anything in the early morning, end of week, end of month, end of quarter, etc.
If you are using load or stress testing tools on your website, make sure to turn off compression. It's nice that you can handle 50,000 users that all use gzip, but the attackers can choose not to.
Understand what services your website/service relies on to operate; common ones are DNS, SMTP, etc. If I can kill your DNS server, people can't access your website; if I can kill services that the business side needs to function, like SMTP, I'm effectively shutting you down too.
If you are hosting your service on pay-as-you-go plans, make sure to implement a billing cap and a lot of loud warnings. Your site going down might not be fun, but it's less fun to wake up to a 150K bill in the morning; if you are a small business, a DoS/DDoS can cause financial damage big enough to put you out of business.
Understand exactly how many resources each "operation" on your website or API costs in terms of memory, disk access/IOPS, networking, DB calls, etc. This is critical to knowing where to implement throttling and by how much.
If you implement throttling, always do it on the "dumber" layer, the layer that issues the request. For example, if you want to limit DB queries to 1000 per minute, do it on the application server, not on the DB server. This is because you always want "graceful" throttling, where the requester chooses not to make a request rather than the responder having to refuse to respond. It also allows selective throttling: for example, you might want to give higher priority to retrieving data for existing users than to letting new users sign up, or vice versa.
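A sketch of that "requester chooses not to ask" idea, as a token bucket on the application side (the rates, names and numbers here are invented for illustration):

```python
import time

class Throttle:
    """Requester-side token bucket: the app layer declines to issue a DB
    query once the budget is spent, rather than letting the DB drown."""

    def __init__(self, rate=1000 / 60.0, burst=50, now=None):
        self.rate, self.capacity = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic() if now is None else now

    def try_acquire(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller backs off or serves a cheaper code path

db_budget = Throttle(rate=1.0, burst=2, now=0.0)  # ~1 query/sec, burst of 2
assert db_budget.try_acquire(now=0.0)
assert db_budget.try_acquire(now=0.0)
assert not db_budget.try_acquire(now=0.0)  # budget spent; the DB never sees this
assert db_budget.try_acquire(now=1.5)      # refilled after backing off
```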
Do not leak IP addresses; this applies both to load balancing and to using scrubbing services like Cloudflare. When you use a service like Cloudflare, make sure the services you protect are not accessible directly, and that no one can figure out the IP address of your website/API endpoint by simply looking at DNS records. Common pitfalls: www.mysite.com points at a Cloudflare IP while mysite.com, www1.mysite.com or somerandomstuff.mysite.com reveal the actual IP. Another common source is having your IP address revealed via hard-coded URLs on your site or within the SDK/documentation for your API. If you have moved to Cloudflare "recently", make sure the old IP address of your services is not recorded somewhere; many sites show historic values for DNS records. If you can, rotate your IP addresses once you sign up for a service like Cloudflare, and in any case make sure you block all requests that do not come through Cloudflare.
When you do load balancing, do it properly: do not rely on DNS for LB/round robin. If you have 3 front-end servers, do not return 3 IP addresses when someone resolves www.mysite.com; put a load balancer in front of them and return only 1 IP address. Relying on DNS for round robin isn't smart: it never works that well, and it lets the attacker focus on each target individually and bring your servers down one by one.
Do not rely on IP blacklisting, and whatever you do, never use "automated blacklisting", regardless of what your DDoS mitigation provider tells you. If you only serve a single geographical region, e.g. NA, Europe, or Spain, you can do some basic geographic restrictions, e.g. limiting access from India or China; this might not be possible if you are, say, a bank or an insurance provider and one of your customers has to access it from abroad. Ironically, this hurts the sites and services that are easiest to optimize for regional blocking: if you only operate in France you might say ha!, I'll block all non-French IP addresses, but then, if you also auto-blacklist abusive IPs, all an attacker needs to do is spoof source addresses across the entire range of French ISPs and your automation blacklists all of France, which only takes a few minutes to achieve! If you blacklist commercial service provider IPs, make sure you understand the impact on your site: blacklisting DigitalOcean or AWS might be easy, but don't be surprised when your mass-mail or digital contract services stop working. If you do use blacklisting/geoblocking, use a single list that you maintain; do not just select "China" in your scrubbing service, firewall, router and WAF, because each can have a different China, which causes inconsistent responses. Use a custom list and know what is in it.
Do not whitelist IPs! I've seen way too many organizations whitelist IPs so that those IPs bypass the CDN/scrubbing service, or are whitelisted on whatever "Super Anti-DDoS Appliance" the CISO decided to buy into this month. IP spoofing is easy! Drive-by attacks are easy! And since the most common IPs to whitelist are things like your corporate internet connection, nothing is easier for an attacker than to figure those out: they simply need to google the network blocks assigned to your organization (if you are big enough, or were incorporated prior to 2005), or send a couple of thousand phishing emails and do some sniffing from the inside.
Understand collateral damage and drive-by attacks. Know who (if anyone) you share your IP addresses with, and figure out how likely they are to be attacked. Yes, everyone will piss off someone with keyboard access these days, but some types of business are much more common targets, and if you are hosting in a datacenter that also hosts a lot of DDoS targets, you might suffer too. For drive-by attacks you need a good understanding of the syndication of your service and, if you are a B2B provider, of your customers. If you provide an embedded widget to other sites and they get DDoSed with a layer 7 attack, you might get hit as well. If you provide a service for businesses, for example an address validation API, you might get hammered if one of your clients is being DDoSed and the attacker is hitting their sign-up pages.
Optimize your website: remove or transfer large files. Documents and videos can be moved to various hosting providers (e.g. YouTube) or CDNs. If you host large files on CDNs, make sure they are only accessible via the CDN; in fact, for the most part it's best if everything hosted on the CDN is only accessible via the CDN, which prevents attackers from hitting the resources on your own servers by targeting your IP instead of the CDN. A common pitfall: some large file is linked on your website as cdn1.mysite.com/largefile but is also accessible directly from your servers via www.mysite.com/largefile.
Implement anti-scripting techniques on your website: captchas, DOM rendering (which makes layer 7 attacks very expensive for the attacker if they have to render the DOM to execute them), and make sure every "expensive" operation is protected by some anti-scripting mechanism. Test this! Poorly implemented captchas are no good, and I don't just mean captchas that are predictable or easy to read with computer vision. If your service looks like LB > web frontend > application server > DB, make sure the captcha field is the first thing validated, in the web frontend or even in the LB/reverse proxy. If a request reaches the application server and you validate all the fields, do the work, and only check the captcha just before hitting the DB, it will barely protect you against DoS/DDoS, if at all.
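The ordering point above can be sketched as a tiny handler where the cheap captcha gate runs before anything expensive (all names and the captcha check itself are made up for illustration):

```python
def handle_request(form, captcha_ok, do_expensive_work):
    """Toy handler: reject unscripted traffic before doing real work."""
    # 1. Cheap anti-scripting gate first (belongs at the frontend/proxy).
    if not captcha_ok(form.get("captcha", "")):
        return 403   # rejected before touching the app server or the DB
    # 2. Only now validate fields and do the expensive part.
    if not form.get("email"):
        return 400
    do_expensive_work(form)
    return 200

calls = []
assert handle_request({"captcha": "bad"}, lambda c: c == "ok",
                      calls.append) == 403
assert calls == []   # the expensive path never ran for the bad request
assert handle_request({"captcha": "ok", "email": "a@b"}, lambda c: c == "ok",
                      calls.append) == 200
assert len(calls) == 1
```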
When you implement any mitigation, design it well and understand leakage and "graceful failure": it's better for the dumb parts of your service to die and restart than for the more complicated parts. For example, if after all your mitigation you still have 10% leakage from your anti-DDoS/scrubbing service to your web frontend, and from there a 5% leakage to your DB, do not scale the web frontend to compensate for the leakage to the point of putting your DB at risk. A web server going down is mostly a trivial thing; it usually brings itself back up without any major issues. If your DB gets hammered, it's a completely different game: you do not want to run out of memory or disk and have to deal with cache or transaction log corruption or consistency issues. Get used to the fact that no matter what you implement, if someone wants to bring you down, they will. Do what is economical for you to mitigate certain attacks, and for the rest, design your service with predicted points of failure that recover on their own, as gracefully and quickly as possible.
Also, in some cases you want to completely disable accepting gzip on the server side: if you accept gzip-encoded requests, an attacker can send very large requests that compress very well, forcing you to burn a lot of memory and CPU cycles decompressing them only to discard them. In principle you want to accept only uncompressed requests but send compressed responses to save bandwidth; in any case, know how your service/application scales in all cases and combinations.
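If you do have to accept compressed bodies, one defense is to cap the decompressed size. A sketch using Python's zlib in streaming mode (the cap value is arbitrary):

```python
import gzip
import zlib

MAX_DECOMPRESSED = 1 << 20   # 1 MiB cap; pick your own limit

def safe_decompress(data: bytes) -> bytes:
    """Inflate a gzip body, refusing once output exceeds the cap,
    so a tiny 'zip bomb' request can't eat memory."""
    out = bytearray()
    d = zlib.decompressobj(16 + zlib.MAX_WBITS)  # 16+ selects the gzip wrapper
    buf = data
    while buf:
        # Ask for at most one byte past the cap so overflow is detectable.
        out += d.decompress(buf, MAX_DECOMPRESSED + 1 - len(out))
        buf = d.unconsumed_tail
        if len(out) > MAX_DECOMPRESSED:
            raise ValueError("decompressed body too large")
    return bytes(out)

# ~10 MiB of zeros compresses down to a few KB: a cheap bomb to send.
bomb = gzip.compress(b"\x00" * (10 * 1024 * 1024))
assert len(bomb) < 64_000
try:
    safe_decompress(bomb)
    raised = False
except ValueError:
    raised = True
assert raised
assert safe_decompress(gzip.compress(b"hello")) == b"hello"
```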
* If you can't CDN all your traffic, a CNAME with low TTL that can quickly switch to a CDN/WAF endpoint can be helpful.
* AWS, Azure and GCP all have mitigations for L3 attacks built into their infrastructure. Because you don't know how they operate, or when, don't rely on them. Accept they may break your service and be prepared to have downtime or the means to shift your product quickly if an attack is big enough or presses enough secret buttons.
* Identify and remove all potential means of amplification both at networking/infra and application. This means not exposing your own nameservers or NTP servers publicly, for L7 this is more complicated as it'll depend on how your APIs and products interact with themselves and each other.
* Load test your products often to know what breaking point is and when performance regressions arise with a given amount of resources allocated. Fixing these early may mean you can ride out a DDoS without needing to do anything if it's small enough and your application efficient enough.