Suddenly,
There's not half the files there used to be,
And there's a milestone hanging over me,
The system crashed so suddenly.

I pushed something wrong,
What it was I could not say.
Now all my data's gone
And I long for yesterday-ay-ay-ay.

Yesterday,
The need for back-ups seemed so far away.
I knew my data was all here to stay,
Now I believe in yesterday.
--
From Usenet
My comment on the situation: Online mirrors are fine, but calling them backups is a stretch of the imagination, since you must assume that a single event can compromise all data within a domain (be it the Internet, or a physical location).
A true backup must be physically and logically separate.
What is a backup, if not just a form of copy anyway?
Opening up scp/rsync and saying "our client only writes new files" is bad: those credentials can still overwrite or delete everything on the server. A dedicated, append-only stream-writing interface over TLS is probably fine (see the sketch below).
As for the other attack vector: segregating the admin credentials so that the stream-writing interface cannot be bypassed, yeah, fun. 2FA only gets you so far.
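For the curious, one off-the-shelf way to get such a stream-writing interface is restic's rest-server in append-only mode over TLS: clients can add snapshots but cannot delete or overwrite them. A minimal sketch, with made-up hostnames, paths, and credentials:

    # On the backup host: serve repositories over TLS, append-only.
    # Clients can write new snapshots but cannot delete or modify old ones.
    rest-server --path /srv/restic-repos \
                --append-only \
                --tls --tls-cert /etc/ssl/backup.crt --tls-key /etc/ssl/backup.key \
                --listen :8000

    # On each client: push a snapshot through the REST interface.
    restic -r rest:https://client1:SECRET@backup.example.com:8000/client1 backup /data

Pruning old snapshots then has to happen out of band on the backup host itself, with credentials the clients never see, which is exactly the admin-credential segregation mentioned above.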
That doesn't stop it from being targeted by hackers. No amount of hindsight will save your backups unless they are in offline cold storage somewhere, protected by men-at-arms.
...after a thorough analysis of the now-encrypted logs?
The little restic backup saved him. It pushed one copy of nonsense, but kept several revisions of the old data.
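For anyone who hasn't used it: restic keeps every snapshot until you explicitly forget and prune, so one bad push can't clobber history. Roughly (the snapshot ID here is made up):

    restic -r /mnt/backup snapshots                   # list all revisions, good and bad
    restic -r /mnt/backup restore a1b2c3d4 --target /restore   # pull back an older snapshot
    restic -r /mnt/backup forget --keep-daily 7 --keep-weekly 4 --prune   # trim old data, deliberately

The fact that deletion is a separate, deliberate step is what saves you when the most recent snapshot is garbage.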
On a similar note: does anyone have any experience with M-DISC? It seems like the perfect solution for long-term storage to me at the moment.
* Just a rant and parody
Don't worry, they're doing their best to get rid of those too.
I find it really hard to have empathy for serious businesses that don't have backups and are dependent on a single cloud.
Like, for example, if you are all-in on AWS and do all your backups of your AWS systems to AWS, then lose your account. Meh… your fault.
If you run a business then you have an absolute obligation to be able to instantly bring your business back up outside your primary hosting provider.
And if you’ve built all your infrastructure in a way that cannot be replicated outside that hosting provider then frankly that’s negligent.
All those AWS Lambda functions that talk to DynamoDB? Guess what… none of that can be brought up elsewhere when you lose your AWS account.
If you are a CTO then this is your primary responsibility and priority above everything else. If you are a CTO who has failed to ensure your business can survive losing your cloud then you are a failed CTO.
I think that would be a loss.
Think: ssh cron job that copies backups from cloud to cold storage
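A minimal sketch of that, with made-up hostnames. The important design choice is pull, not push: the cold-storage box initiates the copy, so credentials stolen from the cloud side cannot reach it.

    # /etc/cron.d/pull-backups on the cold-storage box.
    # --ignore-existing means a tampered file in the cloud cannot
    # overwrite a copy we already hold.
    0 3 * * * backup rsync -a --ignore-existing backup@cloud.example.com:/srv/backups/ /mnt/cold/ >> /var/log/pull-backups.log 2>&1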
The registry for the .DK TLD has published a page on what to do for those affected: <https://punktum.dk/en/faq/if-you-are-a-customer-at-cloudnord...>
What happened?
It is our best estimate that some of the machines being moved were already infected before the move from one data center to another, despite being protected by both firewall and antivirus. The infection had not been actively used in the previous data center, and we had no knowledge that it was there.
During the work of moving servers from one data center to the other, servers that were previously on separate networks were unfortunately wired into the internal network used to manage all of our servers.
Via the internal network, the attackers gained access to central administration systems and the backup systems.

Were admin interfaces IP whitelisted only (no other auth)?
"CloudNordic could not be reached for comment." It's a journalist's job to reach either the company or affected customers to verify the facts.
I don't think we need fearmongering about "shoddy journalism" for something so easy to check.
Edit: in Danish not Norwegian, my apologies
The text is in Danish, since it is a Danish company. Norwegian and Danish are close, but some words have totally different meanings.
In an interview with Radio4.Dk, as reported by Computer Sweden, the CEO of CloudNordic confirmed that the attack took place and that the backups were lost.
Sucks this Danish cloud host provider didn't back stuff up properly.
More often than not, in my experience, the IT team wants proper backups but management baulks at the price and never authorizes it. Until something bad happens, of course.
Backups of the dataplane should of course exist.
Maybe they were backing up their stuff properly, but the backups were wiped as well. Even if you have some fancy append-only storage, someone still has access to it, and that access can be misused.
Then they're not offline backups, are they? I know what you mean, but backing up to a network drive with R/W access is not a backup; it's a copy.
You realize this is contradictory?
There is no excuse for this.
Our industry is mostly run by clowns and unserious people.
It really depends on what SLAs they advertised.
A (national) local hosting company suffered a datacenter fire and lost pretty much all customer data except billing, which was rebuilt since they were using third-party payment processors.
After the fire the company just let people know there were no backups unless bought explicitly, as the terms of service clearly stated.
The company is still operating (I know because I'm a customer) and not much has happened.
Yeah, some customers were screwed, but it was the kind of ignorant customer that gullibly paid €10/year for hosting (with PHP execution), database, domain, and traffic, and then expected the same service level as something far more expensive. There's no way around it: you get what you pay for.