europe-west9 (Paris) has somehow been physically flooded and is hard down. This is obviously bad if you're using the region in question, but has zero impact elsewhere. https://status.cloud.google.com/incidents/dS9ps52MUnxQfyDGPf...
There is a separate issue stopping changes to HTTP load balancers across most of GCP, but it has no impact on serving and they're rolling out a fix already. https://status.cloud.google.com/incidents/uSjFxRvKBheLA4Zr5q...
Anyone have experience with losing an entire DC to flooding?
edit: I just Googled it (lol) and this DC has to be brand spanking new (https://cloud.google.com/blog/products/infrastructure/google...), apparently they just opened it last June. Google must be livid with the contractors who built the place for it to get flooded so soon.
Our DC was intact, but the building and its access were cut off. We lost the backup diesel power generators in the flooding. Of course, grid power was cut off too.
Our DC operating team managed to shut down all the servers and racks cleanly before the UPS power was completely drained. The 4 engineers and 2 security guards then swam out of the compound in chest-high water. (I am not kidding.)
When the rains subsided and the flood waters receded after a couple of days, we had to plan the restart. The facility still had to be certified by health and safety, but we needed to get the datacenter back up.
A secondary operations site that would remote-connect to the DC was brought up within 1 week, since we estimated the rains could continue for a few more days and cause further interruptions. But the critical item for the plan to work was getting a new backup power setup. We rolled in a truck-mounted diesel generator, positioned it at the highest point in the campus (also close to the building tower that housed the DC), and ran power cables to it. Sourcing the generator was a challenge given the time crunch and the rains.
We moved staff to other cities by bus (the airport was shut down) as part of our recovery plan, but we still needed connectivity to our DC for some of the critical processes.
Long story short, it worked.
I'll never forget the experience and the scars from this war story.
"Servers are down, I'll head over to the DC" turned into "Um... it's raining _in the DC_. Get me some tarps and get us cut over to the backup in the office".
Ah, the glory days of running out of a single co-lo across the parking lot with our "backup site" being a former broom closet.
Restoration is hard when health and safety are in question. Good luck to these ops folks <3
[1] https://www.datacenterknowledge.com/archives/2008/06/01/expl...
It was before the dam (1) was built, and floods were a huge problem in SPB (Saint Petersburg).
I wonder how many inches/feet of water we're talking here? The hardware at the top of the racks (unless it experienced an electrical short) is most likely fine?
> Customers using Cloud Console globally are unable to open and view the Compute Engine related pages, like: the Instance creation page, the Disk creation page, the Instance templates page, the Instance Groups page
https://status.cloud.google.com/incidents/dS9ps52MUnxQfyDGPf...
Is it me, or has Google had issues with pushing changes to load balancers pretty much every few months for the past decade? Even before GCP launched, people here on HN sometimes said an outage was extended because load balancer configs couldn't be changed.
Have they not considered just redesigning their config push mechanism...
(Why not just the switches for the DC(s) your VPC is in? Because GCLB IP addresses are anycast addresses, with BGP peers routing them to their nearest Google POP, at which point Google's own backhaul — that's the "premium-tier networking" — takes over delivering your packets to the correct DC. Doing this requires all of Google's POP edge switches to know that a given GCLB-netblock IP address is currently claimed by "a project in DC X", in order to forward the anycast packets there.)
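The "every POP edge switch must know the home DC for each anycast IP" idea can be sketched as a lookup table. This is purely illustrative — the IPs, region names, and forwarding logic below are hypothetical, not Google's actual data plane:

```python
# Each POP's edge holds the same global map: which DC currently "claims"
# each GCLB anycast IP. BGP delivers the packet to the nearest POP, and
# this table tells the POP where on Google's backbone to forward it.
ANYCAST_HOME = {  # hypothetical entries
    "34.120.0.10": "europe-west9",
    "34.120.0.11": "us-central1",
}

def forward(packet_dst_ip: str, this_pop: str) -> str:
    home_dc = ANYCAST_HOME.get(packet_dst_ip)
    if home_dc is None:
        return "drop"  # unclaimed anycast address
    # In reality: encapsulate onto the premium-tier backbone toward home_dc.
    return f"{this_pop} -> backbone -> {home_dc}"

print(forward("34.120.0.10", this_pop="paris-pop"))
# every POP resolves the same IP to the same home DC
```

The catch, of course, is that this table is global shared state: any GCLB create/delete means updating it consistently at every POP, which is exactly what makes the config push slow.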
To ensure consistency between deployed GCLB config versions across this huge distributed system — and to avoid their switches being constantly interrupted by config changes — it would seem to me that at least one, and possibly all four, of the following mechanisms are in play:
1. some distributed system — probably something Zookeeper-esque — keeps global GCLB state, receiving virtual GCLB resource updates at each node and consensus-ing with the nodes in other regions to arrive at a new consistent GCLB state. Reaching this new consensus state across a globally-distributed system takes time, and so introduces latency. (But probably very little, because the resources being referenced are all sharded to their own DCs, so the "consensus algorithm" can be one that never has to resolve conflicts, and instead just needs to ensure all nodes have heard all updates from all other nodes.)
2. Even after a consistent global GCLB state is reached, not every one of those new consistent global states gets converted into a network-switch config file and pushed to all the POPs. Instead, some system takes a snapshot every X minutes of the latest consistent state of the global-GCLB-config-state system, and creates and publishes a network-switch config file for that snapshot state. This introduces variable latency. (A speedrunning analogy: you can do everything else to remediate your app problems as fast as you like, but your LB config update arrives at a bus stop and must wait for the next "config snapshot" bus to come. If it just missed the previous bus, it will have to wait around longer for the next one.)
3. Even after the new network-switch config file is published, the switches might receive it, but only "tick over" into a new config file state on some schedule, potentially skipping some config-file states if they're received at a bad time. Or, alternately, the switches might themselves coordinate so that only when all switches have a given config file available, will any of them go ahead and "tick over" into that new config.
4. Finally, there is probably a "distributed latch" to ensure that all POPs have been updated with the config file that contains your updates, before the Google Cloud control plane will tell you that your update has been applied.
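The "bus stop" effect in mechanism #2 alone is enough to produce variable latency. A back-of-envelope simulation (the 5-minute snapshot period is a made-up number, not Google's):

```python
import random

def snapshot_wait(update_time: float, snapshot_period: float) -> float:
    """Minutes an update waits for the next periodic snapshot 'bus'."""
    missed_by = update_time % snapshot_period
    return snapshot_period - missed_by if missed_by else 0.0

# Simulate many updates arriving at random times against a 5-minute bus.
random.seed(0)
waits = [snapshot_wait(random.uniform(0, 60), 5.0) for _ in range(10_000)]
print(f"min {min(waits):.2f}  max {max(waits):.2f}  "
      f"mean {sum(waits)/len(waits):.2f}")
# Mean wait is ~half the period; worst case is a full period.
```

Stack two or three such stages (state consensus, snapshot bus, switch tick-over, final latch) and you get exactly the pattern observed below: a fixed floor plus a large variable tail.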
No matter which of these factors are at fault, it's a painfully long time. I've never seen a GKE GCLB Ingress resource take less than 7 minutes to acquire an IP address; sometimes, it takes as much as 17 minutes!
And while there's definitely some constant component to the time that this config rollout takes, there's also a huge variable component to it. At least one of #2, #3, or #4 must be happening; possibly multiple of them.
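Mechanism #4, the "distributed latch", is just the control plane refusing to acknowledge your update until every POP reports the new config version. A minimal sketch with fake POPs that converge at different speeds (names and the polling approach are my own invention, not Google's):

```python
import time

def wait_for_rollout(pops, target_version, poll_interval=0.01, timeout=10.0):
    """Block until every POP reports a config version >= target_version.

    pops: dict mapping POP name -> callable returning its current version.
    """
    deadline = time.monotonic() + timeout
    pending = set(pops)
    while True:
        # Only re-check POPs that haven't caught up yet.
        pending = {name for name in pending if pops[name]() < target_version}
        if not pending:
            return True
        if time.monotonic() > deadline:
            raise TimeoutError(f"POPs still behind: {sorted(pending)}")
        time.sleep(poll_interval)

# Fake POPs: each poll, a POP catches up by one version, capped at 42.
versions = {"par": 41, "iad": 42, "sin": 40}
def make_pop(name):
    def read():
        versions[name] = min(versions[name] + 1, 42)
        return versions[name]
    return read

pops = {name: make_pop(name) for name in versions}
result = wait_for_rollout(pops, target_version=42)
print(result)  # True once every POP has reached v42
```

The slowest POP gates the whole acknowledgement, so one lagging edge site anywhere in the world adds directly to the latency you see in the console.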
---
You might ask why load-balancer changes in AWS don't suffer from this same problem. AWS doesn't have nearly as complex a problem to solve, since AFAIK their ALBs don't give out anycast IPs, just regular unicast IPs that require the packets be delivered to the AWS DC over the public Internet. (Though, on the other hand, AWS CDN changes do take minutes to roll out — CloudFront, at least, does distributed version-latching for rollouts, and might be doing some of the other steps above as well.)
You might ask why routing changes in Cloudflare don't suffer from this same problem. I don't know! But I know that they don't give their tenants individual anycast IP addresses, instead assigning tenants to 2-to-3 of N anycast "hub" addresses they statically maintain; and then, rather than routing packets arriving at those addresses based purely on the IP, they have to do L4 (TLS SNI) or L7 (HTTP Host header) routing. Presumably, doing that demands "smart" switches; which can then be arbitrarily programmed to do dynamic stuff — like keeping routing rules in an in-memory read-through cache with TTLs, rather than depending on an external system to push new routing tables to them.
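That last idea — a read-through cache with TTLs instead of pushed routing tables — is a generic pattern worth sketching. Everything here is an assumption for illustration (Cloudflare's actual design is not public); the point is just that the edge pulls rules lazily on a cache miss rather than waiting for a global push:

```python
import time

class TTLReadThroughCache:
    """Serve routing rules from memory until the TTL expires, then
    re-fetch from the backing store on demand (read-through pattern)."""

    def __init__(self, fetch, ttl_seconds):
        self.fetch = fetch          # e.g. "which origin pool for this SNI?"
        self.ttl = ttl_seconds
        self._entries = {}          # key -> (value, expires_at)

    def get(self, key):
        entry = self._entries.get(key)
        now = time.monotonic()
        if entry is not None and entry[1] > now:
            return entry[0]         # fresh: no control-plane round trip
        value = self.fetch(key)     # stale or missing: pull on demand
        self._entries[key] = (value, now + self.ttl)
        return value

# Hypothetical routing table keyed by TLS SNI hostname.
control_plane = {"example.com": "origin-pool-7"}
fetches = []
def fetch_route(host):
    fetches.append(host)
    return control_plane[host]

cache = TTLReadThroughCache(fetch_route, ttl_seconds=30)
print(cache.get("example.com"))   # miss: pulls from the control plane
print(cache.get("example.com"))   # hit: served from memory
print(len(fetches))               # only 1 backend fetch
```

With this model a routing change becomes visible within one TTL at worst, with no global push at all — the trade-off being an extra lookup latency on cold keys and bounded staleness on warm ones.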
I'm afraid this is not true. We have nothing in europe-west9, but the problem in this region caused a global problem with the Cloud Console, which hit us: we were unable to use it for several hours.
Snippet from https://status.cloud.google.com/incidents/dS9ps52MUnxQfyDGPf...:
"Cloud Console: Experienced a global outage, which has been mitigated. Management tasks should be operational again for operations outside the affected region (europe-west9). Primary impact was observed from 2023-04-25 23:15:30 PDT to 2023-04-26 03:38:40 PDT."
Sounds like some global control plane related to instance management operations started returning errors once one region failed. Or perhaps it was just the UI frontend?
[1] https://status.cloud.google.com/incidents/BWK7QzFBmfaZ4iztke...
Warning FailedToCreateRoute 4m59s route_controller Could not create route fc61a148-b428-43fa-xxxx-xxxx 10.28.167.0/24 for node gke-xxx-xxx after 16.320065487s: googleapi: Error 503: INTERNAL_ERROR - Internal error. Please try again or contact Google Support.
Anyone facing something similar?
Good reminder that downtime happens for many wild reasons, and you may want to take 30 seconds and set up a free website / API monitor with Heii On-Call [1] because we would have alerted you to either of these issues if they affected your app.
Really, a simple HTTP probe provides tremendous monitoring power. I already was telling people that it covered issues at the DNS, TCP, SSL certificate, load balancer, framework, and application layers. Now I will have to add “datacenter flood” as well :P
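A probe like that really is just one GET: a single request exercises DNS, the TCP handshake, the TLS certificate (for https), the load balancer, and the app itself. A minimal stdlib sketch — the local throwaway server here just stands in for your real endpoint:

```python
import http.server
import threading
import time
import urllib.request

def probe(url: str, timeout: float = 5.0) -> dict:
    """One GET exercises DNS, TCP, TLS (for https), the LB, and the app."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            up = 200 <= resp.status < 400
    except Exception as exc:  # DNS failure, refused conn, bad cert, 5xx...
        return {"up": False, "error": type(exc).__name__}
    return {"up": up, "latency_s": round(time.monotonic() - start, 3)}

# Demo against a throwaway local server (stands in for your app).
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = probe(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()
print(result["up"])  # True while the server answers
```

Run it on a schedule from somewhere outside your own DC, and any failure along that whole path — datacenter flood included — shows up as a failed probe.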
Best wishes to everyone working on europe-west9.
[1] https://heiioncall.com/ (I recently helped build our HTTP probe background infrastructure in Crystal)
[1]: https://lafibre.info/datacenter/incendie-maitrise-globalswit...
I'm a long time customer and have only good things to tell so far.
> We expect general unavailability of the europe-west9 region.
Why would emergency shutdown of a single AZ lead to general unavailability of a region? Isn't that the point of multiple AZs?
> There is no current ETA for recovery of operations in the europe-west9 region at this time, but it is expected to be an extended outage
yikes
If so, that's ... not good.
https://dcmag.fr/breve-un-depart-dincendie-dans-un-batiment-...
Have I been watching too much espionage media?