We discovered this change recently because my dad was looking for a file that Dropbox had accidentally overwritten. At first we said, "No problem, this is why we pay for Backblaze."
Then we learned that this policy had changed a few months ago and we were never notified. The file was unrecoverable.
If anyone at Backblaze is reading this: I pay for your product so I can install you on my parents' machine and never worry about it again. You decided saving on cloud storage was worth breaking this promise. Bad, bad call.
I need it to capture local data, even though that local data is getting synced to Google Drive. Where we sync our data really has nothing to do with Backblaze backing up the endpoint. We don't wholly trust sync, that's why we have backup.
On my personal Mac I have iCloud Drive syncing my desktop, and a while back iCloud ate a file I was working on. Backblaze had it captured, thankfully. But if they are going to exclude iCloud Drive synced folders, and it sounds like that is their intention, Backblaze is useless to me.
The deal was that Backblaze backs things up and I don't have to worry about it. Learning that it does not back things up is a punch to the gut. I am familiar with the exclusions, and I look at that list to make sure I'm not missing anything from my backups. I had always thought the exclusions list was exhaustive.
Excluding other files and folders without telling me about it breaks the deal. Dropbox is important to several of the users I installed it for. Ignoring .git folders is another one that affects me and I had not known about. Ouch.
I will now have to look for alternatives. It has to be easy to install, run seamlessly on non-technical users' machines and be reliable.
I find it hard to think of a worse breach of trust for a backup service than not backing up files!
I also want to be clear, this wasn’t about saving on storage. It came from cases where backing up cloud-synced folders (like Dropbox) was leading to unreliable or incomplete restores because of how those files are managed under the hood.
When Dropbox began using reparse points for synced files, those files no longer behaved like standard local files. Because of that, Backblaze Computer Backup can’t reliably back them up or restore them. The current behavior is focused on ensuring we only back up data we can reliably restore, and we are actively exploring ways to better support Dropbox and data touched by other sync services.
It is true that we recently updated how Backblaze Computer Backup handles cloud-synced folders. This decision was driven by a consistent set of technical issues we were seeing at scale, most of them caused by changes in third-party sync tools: unreliable backups and incomplete restores when backing up files those tools manage.
To give a bit more context on the “why”: these cloud storage providers now rely heavily on OS-level frameworks to manage sync state. On Windows, for example, files are often represented as reparse points via the Cloud Files API. While they can appear local, they are still system-managed placeholders, which makes it difficult to reliably back them up as standard on-disk files.
Moreover, we built our product not to back up reparse points, for two reasons:
1. We wanted the backup client to be light on the system and to back up only the needed user-generated files.
2. We wanted the service to be unlimited, and following reparse points would have meant backing up enormous amounts of cloud-resident data.
We’ve made targeted investments where we can, for example, adding support for iCloud Drive by working within Apple’s model and supporting Google Drive, but extending that same level of support to third-party providers like Dropbox or OneDrive is more complex and not included in the current version.
We are currently exploring building an add-on that either follows reparse points or backs up the tagged data in another way.
We also hear you clearly on the communication gap. Both the sync providers and Backblaze should have been more proactive in notifying customers about a change with this level of impact. Please don't hesitate to reach out to me or our support team directly if you have any questions. https://help.backblaze.com/hc/en-us/requests/new
We are here to help.
This is, in disguise, another example of two parties disagreeing about what "unlimited" means in the context of backup, even though they claim to have "no restrictions on file type or size" [2].
[1] https://www.reddit.com/r/backblaze/comments/jsrqoz/personal_... [2] https://www.backblaze.com/cloud-backup/personal
Always prefer businesses that are upfront and honest about what they can offer their users in a sustainable way.
Or that they're targeting the mass retail market, where people are technically ignorant, and "unlimited" is required to compete.
And, statistically speaking, it is viable as long as a company keeps its users within a normal distribution.
But we'd always have a few people at the end of the semester print 493 blank pages using up all of their print quota they'd "paid for". No sir, you didn't pay for 500 pages of printing a semester, we'd let you print as much as you needed, we just had to put a quota in place to prevent some joker from wallpapering the lecture hall.
It was hard to express what we meant and "unlimited" didn't cut it.
so it’s an even more frustrating and misleading statement.
The new and very interesting problem with their business model is that drive prices have doubled - and in some cases, more than doubled - in the last 12 months.
Backblaze has a lot of debt and at some point the numbers don't make sense anymore.
Oh well, I guess this is why we're given two kidneys.
If a company uses the word unlimited to describe their service, but then attempts to weasel out of it via their T&Cs, that doesn't constitute a disagreement over the meaning of the word unlimited. It just means the company is lying.
Unlimited, however, they can offer. I don’t see how people get into the mental block of thinking something is nefarious when a company offers you unlimited hosting or data. Yes, they know it’s impossible if everyone took full advantage of it. They also know most people won’t, so they don’t have to spend time worrying about it. It’s a simple actuarial exercise to work out pricing that covers the usage of your users.
Back in the early 2000s I ran a web hosting service that was predominantly a LAMP stack shared hosting environment. It had several unlimited plans and they were easy to estimate/price. The only times I had an issue supporting a heavy user, it would turn out they were doing something that wasn’t allowed. Back then, it was usually something pron or mp3 related. So the user would get kicked off for that. I didn’t have any issues with supporting the usage load if it was within TOS. The margins were so high it was almost impossible to find a user that could give me any trouble from an economic standpoint.
When backup software hits one of these placeholder/reparse-point files, its options are roughly:
- report an error
- ignore
- materialize
Regardless, if you make backup software that doesn’t give this level of control to users, and you make a change about which files you’re going to back up, you should probably be a lot more vocal with your users about the change. Vanishingly few people read release notes.
Making the change without making it clear, though, is just awful. A clear recipe for catastrophic loss and a drip, drip, drip of "How Backblaze Lost My Stuff" stories.
Hiding the network always ends in pain. But never goes out of style.
Hell, if I open a directory of photos and my OS tries to pull exif data for each one, it would be wild if that caused those files to be fully downloaded and consume disk space.
After a backup, you’d go out to a coffee shop or on a plane only to find that the files in the synced folder you used yesterday, and expected to still be there, were not - but photos from ten years ago were available!
It would be reasonable to say that if you run the file sync in a mode that keeps everything locally, then Backblaze should be backing it up. Arguably they should even when not in that mode, but it'll churn files repeatedly as you stream files in and out of local storage with the cloud provider.
When you have a couple terabytes of data in that drive, is it acceptable to cycle all that data and use all that bandwidth and wear down your SSD at the same time?
Also, a high number of small files is a problem for these services. I have a large font collection in my cloud account and oh boy, if I want to sync that thing, the whole thing proverbially overheats from all the queries it's sending.
And, as a separate note, they shouldn't be balking at the amount of data in a virtualized onedrive or dropbox either considering the user could get a many-terabyte hard drive for significantly less money.
The moment you call read() (or fopen() or your favorite function), the download will be triggered. It's a hook sitting between you and the file. You can't ignore it.
The only way to bypass it is to remount it via rclone or something and use its "ls" and "lsd" commands to query filenames. Otherwise it'll download, and that's how it's expected to work.
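For what it's worth, a rough sketch of that rclone approach, assuming a configured rclone remote called "dropbox:" (the remote name is a placeholder):

    rclone lsd dropbox:                        # list top-level directories, no download
    rclone ls dropbox:Projects                 # list files with sizes, no content download
    rclone lsf --format pst dropbox:Projects   # path, size, modtime per line

The listing goes through the provider's API rather than the placeholder mount, so nothing gets hydrated onto local disk.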
Your backup solution is not something you ever want to be the source of surprises!
I don't quite understand why it's still like this; it's probably the biggest reason why git tends to play poorly with a lot of filesystem tools (not just backups). If it'd been something like an SQLite database instead (just an example really), you wouldn't get so much unnecessary inode bloat.
At the same time Backblaze is a backup solution. The need to back up everything is sort of baked in there. They promise to be the third backup solution in a three layer strategy (backup directly connected, backup in home, backup external), and that third one is probably the single most important one of them all since it's the one you're going to be touching the least in an ideal scenario. They really can't be excluding any files whatsoever.
The cloud service exclusion is similar, only much worse. Imagine getting hit by a cryptoworm. Your cloud storage tool is dutifully going to sync everything encrypted, junking up your entire storage across devices, and because restoring old versions is both a pain and near impossible at scale, you need an actual backup solution for that situation. Backblaze excluding files in those folders feels like a complete misunderstanding of what their purpose should be.
Ironically, I believe you have it backwards: pack files, git's solution to the "too many tiny files" problem, are the issue here; not the tiny files themselves.
In my experience, incremental backup software works best with many small files that never change. Scanning is usually just a matter of checking modification times and moving on. This isn't fast, but it's fast enough for backups and can be optimized by monitoring for file changes in a long-running daemon.
However, lots of mostly identical files ARE an issue for filesystems as they tend to waste a lot of space. Git solves this issue by packing these small objects into larger pack files, then compressing them.
Unfortunately, it's those pack files that cause issues for backup software: any time git "garbage collects" and creates new pack files, it ends up deleting and creating a bunch of large files filled with what looks like random data (due to compression). Constantly creating/deleting large files filled with random data wreaks havoc on incremental/deduplicating backup systems.
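You can watch this happen in any sizable repo; a quick illustration (nothing here is specific to any backup tool):

    ls -lh .git/objects/pack/    # note the pack-*.pack file names and sizes
    git gc                       # repack: old packs are replaced wholesale
    ls -lh .git/objects/pack/    # new, differently named packs full of compressed data

From a deduplicating backup tool's point of view, those new packs share almost nothing byte-for-byte with the old ones.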
Why should a file backup solution adapt to work with git? Or any application? It should not try to understand what a git object is.
I’m paying them to copy files from a folder to their servers; just do that, no matter what the file is. Stay at the filesystem level, not the application level.
It's that to back up a folder on a filesystem, you need to traverse that folder and check every file in that folder to see if it's changed. Most filesystem tools usually assume a fairly low file count for these operations.
Git, rather unusually, tends to produce a lot of files in regular use; before packing, every commit/object/branch is simply stored as a file on the filesystem (branches only as pointers). Packing fixes that by compressing commit and object files together, but it's not done by default (only after an initial clone or when the garbage collector runs). Iterating over a .git folder can take a lot of time in a place that's typically not very well optimized (since most "normal" people don't have thousands of tiny files in their folders that contain sprawled out application state.)
The correct solution here is either for git to change, or for Backblaze to implement better iteration logic (which will probably require special handling for git..., so it'd be more "correct" to fix up git, since Backblaze's tools aren't the only ones with this problem.)
That's a really important fact that's getting buried so I'd like to highlight it here.
This is a joke, but honestly anyone here shouldn't be directly backing up their filesystems and should instead be using the right tool for the job. You'll make the world a more efficient place, have more robust and quicker to recover backups, and save some money along the way.
See Fossil (https://fossil-scm.org/)
P.S. There's also (https://www.sourcegear.com/vault/)
> SourceGear Vault Pro is a version control and bug tracking solution for professional development teams. Vault Standard is for those who only want version control. Vault is based on a client / server architecture using technologies such as Microsoft SQL Server and IIS Web Services for increased performance, scalability, and security.
It's the same reason why the postgres autovacuum daemon tends to be borderline useless unless you retune it[0]: the defaults are barmy. git gc only runs if there's 6700 loose unpacked objects[1]. Most typical filesystem tools tend to start balking at traversing ~1000 files in a structure (depends a bit on the filesystem/OS as well, Windows tends to get slower a good bit earlier than Linux).
To fix it, running
> git config --global gc.auto 1000
should retune it, and any subsequent commit to your repos will trigger garbage collection properly when there are around 1000 loose files. Pack file management seems to be properly tuned by default; at more than 50 packs, gc will repack into a larger pack.
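To see where a repo currently stands relative to that threshold:

    git count-objects -v   # "count" = loose objects (what gc.auto is compared against); "in-pack" = already packed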
[0]: For anyone curious, the default postgres autovacuum setting runs only when 10% of the table consists of dead tuples (roughly: deleted+every revision of an updated row). If you're working with a beefy table, you're never hitting 10%. Either tune it down or create an external cronjob to run vacuum analyze more frequently on the tables you need to keep speedy. I'm pretty sure the defaults are tuned solely to ensure that Postgres' internal tables are fast, since those seem to only have active rows to a point where it'd warrant autovacuum.
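For completeness, the per-table tuning mentioned above looks roughly like this (database and table names are placeholders):

    # lower the dead-tuple fraction that triggers autovacuum on one big table
    psql -d mydb -c "ALTER TABLE big_table SET (autovacuum_vacuum_scale_factor = 0.01);"
    # or the blunt instrument: a nightly cron entry
    # 0 3 * * * psql -d mydb -c "VACUUM ANALYZE big_table;"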
I contacted the support asking WTF, "oh the file got deleted at some point, sorry for that", and they offered me 3 months of credits.
I do not trust my Backblaze backups anymore.
First thing I noticed is that if it can't download a file due to network or some other problem then it just skips it. But you can force it to retry by modifying its job file which is just an SQLite DB. Also it stores and downloads files by splitting them into small chunks. It stores checksums of these chunks, but it doesn't store the complete checksum of the file, so judging by how badly the client is written I can't be sure that restored files are not corrupted after the stitching.
Then I found out that it can't download some files even after dozens of retries because it seems they are corrupted on Backblaze side.
But the most jarring issue for me is that it mangled all non-ascii filenames. They are stored as UTF-8 in the DB, but the client saves them as Windows-1252 or something. So I ended up with hundreds of gigabytes of files with names like фикац, and I can't just re-encode these names back, because some characters were dropped during the process.
I wanted to write a script that forces Backblaze Client to redownload files, logs all files that can't be restored, fixes the broken names and splits restored files back into chunks to validate their checksums against the SQLite DB, but it was too big of a task for me, so I just procrastinated for 3 years, while keeping paying monthly Backblaze fees because it's sad to let go of my data.
I wonder if they fixed their client since then.
> I wanted to write a script that forces Backblaze Client to redownload files, logs all files that can't be restored, fixes the broken names and splits restored files back into chunks to validate their checksums against the SQLite DB, but it was too big of a task for me, so I just procrastinated for 3 years, while keeping paying monthly Backblaze fees because it's sad to let go of my data.
Filenames are probably the most valuable of metadata for them to mangle. I value them as much as I do file creation/modification times. A backup program is dead to me if they mess up either of these.
I think it should be trivial for you to pipe your request into Claude now, and get them to write a quick script. Hope that'll free you from Backblaze for good!
> I wonder if they fixed their client since then.
They have not. I spent more than a week trying to restore a little less than 2 TB of backup because the client would just freeze at the last few % every time. I ended up having to break the restore into 200GB chunks on the web client and download and restore manually, which was extremely frustrating and made me despise their (required) Windows client.
- - -
Hey, I tried restoring a file from my backup — downloading it directly didn't work, and creating a restore with it also failed – I got an email telling me to contact y'all about it.
Can you explain to me what happened here, and what can I do to get my file(s?) back?
- - -
Hi Jan,
Thanks for writing in!
I've reached out to our engineers regarding your restore, and I will get back to you as soon as I have an update. For now, I will keep the ticket open.
- - -
Hi Jan,
Regarding the file itself - it was deleted back in 2022, but unfortunately, the deletion never got recorded properly, which made it seem like the file still existed.
Thus, when you tried to restore it, the restoration failed, as the file doesn't actually exist anymore. In this case, it shouldn't have been shown in the first place.
For that, I do apologize. As compensation, we've granted you 3 monthly backup credits which will apply on your next renewal. Please let me know if you have any further questions.
- - -
That makes me even more confused to be honest - I’ve been paying for forever history since January 2022 according to my invoices?
Do you know how/when exactly it got deleted?
- - -
Hi Jan,
Unfortunately, we don't have that information available to us. Again, I do apologize.
- - -
I really don’t want to be rude, but that seems like a very serious issue to me and I’m not satisfied with that response.
If I’m paying for a forever backup, I expect it to be forever - and if some file got deleted even despite me paying for the “keep my file history forever” option, “oh whoops sorry our bad but we don’t have any more info” is really not a satisfactory answer.
I don’t hold it against _you_ personally, but I really need to know more about what happened here - if this file got randomly disappeared, how am I supposed to trust the reliability of anything else that’s supposed to be safely backed up?
- - -
Hi Jan,
I'll inquire with our engineers tomorrow when they're back in, and I'll update you as soon as I can. For now, I will keep the ticket open.
- - -
Appreciate that, thank you! It’s fine if the investigation takes longer, but I just want to get to the bottom of what happened here :)
- - -
Hi Jan,
Thanks for your patience.
According to our engineers and my management team:
With the way our program logs information, we don't have the specific information that explains exactly why the file was removed from the backup. Our more recent versions of the client, however, have vastly improved our consistency checks and introduced additional protections and audits to ensure complete reliability from an active backup.
Looking at your account, I do see that your backup is currently not active, so I recommend running the Backblaze installer over your current installation to repair it, and inherit your original backup state so that our updates can check your backup.
I do apologize, and I know it's not an ideal answer, but unfortunately, that is the extent of what we can tell you about what has happened.
- - -
I gave up escalating at this point and just decided these aren’t trusted anymore.
The files in question are four years old at this point, so it’s hard for me to conclusively state. I guess there might have been a perfect storm of that specific file being deleted because it was due to expire before I upgraded to “keep history forever”, but I don’t think it’s super likely, and I absolutely would expect them to have telemetry about that in any case.
If anyone from Backblaze stumbles upon it and wants to escalate/reinvestigate, the support ID is #1181161.
https://utcc.utoronto.ca/~cks/space/blog/sysadmin/BackupTest...
You should naturally test your ordinary restore procedures (single-file, one directory, spot-checks) on the regular, and you should also form a viable disaster recovery plan, based on your projected risks. What if your house burns down? What if you're burglarized? What if your password manager loses all passwords? etc.
If you've never successfully run a disaster-recovery drill, then you don't have a plan.
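As a concrete example of the ordinary spot-check kind of drill, assuming a restic repository (substitute your own tool's equivalent; paths are placeholders):

    # restore the latest snapshot into a scratch directory
    restic -r /mnt/backup-repo restore latest --target /tmp/restore-drill
    # spot-check one directory tree against the live copy
    diff -rq ~/Documents /tmp/restore-drill/home/me/Documents

The disaster-recovery drill is the same thing scaled up: restore onto a blank machine, starting from nothing but your credentials.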
"Those who fail to plan, plan to fail!"
> I contacted the support asking WTF, "oh the file got deleted at some point, sorry for that", and they offered me 3 months of credits.
This happened to me with CrashPlan for Windows many years ago, because of some Volume Shadow Copy Service thing. I noped out of there right after.
However, backing up these kinds of directories has always been ill-defined. Dropbox/Google Drive/etc. files are not actually present locally - at least not until you access the file or it decides to cache it. Should backup software force you to download all 1TB+ of your cloud storage? What if the local system is low on space? What if the network is too slow? What if the actual data is in an already-excluded %AppData% location?
Similar issue with VCS: should you sync changes to .git every minute? Every hour? When is .git in a consistent state?
IMO .git and other VCS folders should just be synced X times per day, waiting for .git to be unchanged for Y minutes before syncing. Hell, I bet Claude could write a special Git-aware backup script.
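Something like this is probably all the "git aware" part needs to be; a rough sketch assuming GNU find and a 10-minute quiet window (paths and timings are placeholders):

    repo=~/projects/myrepo
    # wait until nothing under .git has been modified in the last 10 minutes
    while [ -n "$(find "$repo/.git" -type f -mmin -10 -print -quit)" ]; do
        sleep 60
    done
    # then snapshot the repository metadata in one shot
    tar -czf ~/backups/myrepo-git-$(date +%F).tar.gz -C "$repo" .git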
But Google Drive and Dropbox mount points are not real. It’s crazy to expect backup software to handle that unless explicitly advertised.
Eh, I don't agree. Case in point: Microsoft.
Or in other words: a sucker is born every minute.
<excludefname_rule plat="mac" osVers="*" ruleIsOptional="f" skipFirstCharThenStartsWith="*" contains_1="/users/username/dropbox/" contains_2="*" doesNotContain="*" endsWith="*" hasFileExtension="*" />
That is the exact path to my Dropbox folder, and I presume if I move my Dropbox folder this xml file will be updated to point to the new location. The top of the xml file states "Mandatory Exclusions: editing this file DOES NOT DO ANYTHING".
.git files seem to still be backing up on my machine, although they are hidden by default in the web restore (you must open Filters and enable Show Hidden Files). I don't see an option to show hidden files/folders in the Backblaze Restore app.
That would be nice, they'd be able to get their history back!
Try checking bzexcluderules_editable.xml. A few years ago, Backblaze would back up .git folders for Mac but not Windows. Not sure if this is still the case.
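If you want to check for yourself, the rules files on macOS have historically lived under /Library/Backblaze.bzpkg/bzdata/ (paths are from memory and may vary by version):

    grep -iE 'git|dropbox|onedrive' /Library/Backblaze.bzpkg/bzdata/bzexcluderules_editable.xml
    grep -iE 'git|dropbox|onedrive' /Library/Backblaze.bzpkg/bzdata/bzexcluderules_mandatory.xml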
The one thing they have to do is back up everything, and when you see it in their console you should be able to rest assured they are going to continue to back it up.
They’ve let the desktop client linger; it’s difficult to add meaningful exceptions. It’s obvious they want everyone to use B2 now.
Borg backup is a good tool in my opinion and has everything that I need (deduplication, compression, mountable snapshots).
Hetzner Storage Box is nothing fancy but good enough for a backup, and it is noticeably cheaper than the alternatives (I pay about 10 eur/month for 5TB of storage).
Before that I was using s3cmd [3] to back up to an S3 bucket.
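For anyone curious what the Borg setup looks like in practice, a rough sketch (the Storage Box user/host are placeholders, and the repo is assumed to already exist via `borg init --encryption=repokey`):

    export BORG_REPO=ssh://u123456@u123456.your-storagebox.de:23/./backups/laptop
    borg create --compression zstd --stats ::'laptop-{now:%Y-%m-%d}' ~/Documents ~/Photos
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 12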
Basically it works like this:
- I have syncthing moving files between all my devices. The larger the device, the more stuff I move there[2]. My phone only has my keepass file and a few other docs, my gaming PC has that plus all of my photos and music, etc.
- All of this ends up on a raspberry pi with a connected USB harddrive, which has everything on it. Why yes, that is very shoddy and short term! The pi is mirrored on my gaming PC though, which is awake once every day or two, so if it completely breaks I still have everything locally.
- Nightly, a restic job runs which backs up everything on the pi to an S3-compatible cloud[3] and cleans out old snapshots (30 days, 52 weeks, 60 months, then yearly); a rough sketch of that job is below, after the footnotes.
- Yearly I test restoring a random backup, both on the pi, and on another device, to make sure there is no required knowledge stuck on there.
This was somewhat of a pain to set up, but since the pi is never off it just ticks along, and I check it periodically to make sure nothing has broken.
[1] there is always weirdness with these tools. They don't sync how you think, or when you actually want to restore it takes forever, or they are stuck in perpetual sync cycles
[2] I sync multiple directories, broadly "very small", "small", "dumping ground", and "media", from smallest to largest.
[3] Currently Wasabi, but it really doesn't matter. Restic encrypts client side; you just need to trust the provider enough that they don't completely collapse at the same time that you need backups.
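The nightly job mentioned above is roughly this (remote, bucket, and the yearly retention count are placeholders):

    restic -r s3:s3.wasabisys.com/pi-backups backup /mnt/usb-drive
    restic -r s3:s3.wasabisys.com/pi-backups forget --prune \
        --keep-daily 30 --keep-weekly 52 --keep-monthly 60 --keep-yearly 10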
I still trust that restic's checksums will verify whether a restore is correct, but this way a random part of the storage gets tested every so often, in case some old pack file has been damaged.
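In restic terms that periodic spot-check is something like (repository URL is a placeholder):

    # re-download and verify a random 5% of the pack data each run
    restic -r s3:s3.wasabisys.com/pi-backups check --read-data-subset=5%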
Props for getting this implemented and seemingly trusted... I wish there was an easier way to handle some of this stuff (eg: tiny secure key material => hot syncthing => "live" git files => warm docs and photos => cold bulk movies, isos, etc)... along with selective "on demand pass through browse/fetch/cache"
They all have different policy, size, cost, technical details, and overall SLA/quality tradeoffs.
~ 5 years ago, I had a development flow that involved a large source tree (1-10K files, including build output) that was syncthing-ed over a residential network connection to some k8s stuff.
Desyncs/corruptions happened constantly, even though it was a one-way send.
I've never had similar issues with rsync or unison (well, I have in unison, but that's two-way sync, and it always prompted to ask for help by design).
Anyway, my decade-old synology is dying, so I'm setting up a replacement. For other reasons (mostly a decade of systemd / pulse audio finding novel ways to ruin my day, and not really understanding how to restore my synology backups), I've jumped ship over to FreeBSD. I've heard good things about using zfs to get:
sanoid + syncoid -> zfs send -> zfs recv -> restic
In the absence of ZFS, I'd do:
rsync -> restic
Or:
unison <-> unison -> restic.
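To spell out the zfs leg of those flows (pool/dataset/host names are placeholders; sanoid/syncoid automate this snapshot-and-send loop):

    zfs snapshot tank/home@backup-$(date +%F)
    # initial full send; subsequent runs would use `zfs send -i <prev> <new>`
    zfs send tank/home@backup-$(date +%F) | ssh backup-host zfs recv -uF backup/home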
So, similar to what you've landed on, but with one size tier. I have docker containers that the phone talks to for stuff like calendars, and just have the source of the backup flow host my git repos.
One thing to do no matter what:
Write at least 100,000 files to the source then restore from backup (/ on a linux VM is great for this). Run rsync in dry run / checksum mode on the two trees. Confirm the metadata + contents match on both sides. I haven't gotten around to this yet with the flow I just proposed. Almost all consumer backup tools fail this test. Comments here suggest backblaze's consumer offering fails it badly. I'm using B2, but I haven't scrubbed my backup sets in a while. I get the impression it has much higher consistency / durability.
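The verification pass described there is essentially one rsync invocation (source/restore paths are placeholders):

    # -a preserve/compare metadata, -n dry run, -c compare by checksum;
    # anything printed is a mismatch between the two trees
    rsync -avnc --delete /source/tree/ /restored/tree/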
I had no idea that it was such a good bargain. I used to be a Crashplan user back in the day, and I always thought Backblaze had tiered limits.
I've been using Duplicati to sync a lot of data to S3's cheapest tape-based long term storage tier. It's a serious pain in the ass because it takes hours to queue up and retrieve a file. It's a heavy enough process that I don't do anything nearly close to enough testing to make sure my backups are restorable, which is a self-inflicted future injury.
Here's the thing: I'm paying about $14/month for that S3 storage, which makes $99/year a total steal. I don't use Dropbox/Box/OneDrive/iCloud so the grievances mentioned by the author are not major hurdles for me. I do find the idea that it is silently ignoring .git folders troubling, primarily because they are indeed not listed in the exclusion list.
I am a bit miffed that we're actively prevented from backing up the various Program Files folders, because I have a large number of VSTi instruments that I'll need to ensure are rcloned or something for this to work.
A big difference here is that Backblaze only keeps deleted/changed files for 30 days. Deleted files can go unnoticed for some time, especially if done by a malicious app or ignorant AI.
I'd pay that extra few dollars for peace of mind.
As for testing recovery, you can validate file counts, sizes + checksums without performing recovery.
A few shell scripts give you the power of advanced enterprise backup, whereas backblaze only supports GUI restores.
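A minimal sketch of that kind of validation, assuming GNU coreutils (paths are placeholders):

    # record a manifest from the live data
    cd ~/Documents && find . -type f -exec sha256sum {} + | sort -k2 > ~/manifest.sha256
    # after a test restore, re-check every file from the restored copy of ~/Documents
    # and show only the failures
    cd /tmp/restore-test/Documents && sha256sum -c ~/manifest.sha256 | grep -v ': OK$'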
There are actually a lot of cheaper S3-compatible services out there (like Backblaze B2 or Cloudflare R2). The pricing may work out such that you can just back up to these directly. It certainly gives you far more control than Backblaze Backup.
I get that this is not a restorable image, but for $100 a year I'm not expecting that.
I will never trust them again with my data.
Not backing up .git folders however is completely unacceptable.
I have hundreds of small projects where I use git to track history locally with no remote at all. The intention is never to push it anywhere. I don't like to say these sorts of things, and I don't say it lightly, but someone should be fired over this decision.
I know this is somewhat beside the point, but: learn your tools, people. The commit history could probably have been easily restored without involving any backup. The commits are not just instantly gone.
Indeed, the commits and blobs might even have still been available on the GitHub remote. I'm not sure whether they clean them up on some interval, but a bunch of the stuff you "delete" from git still stays on the remote regardless of what you push.
I've never needed to restore anything, so can't say anything about this, but once, one of my devices deleted a file in Syncthing, and I went into Backblaze to see if they have any logs of deletions/file modifications (had it disabled in syncthing).
I don't remember the exact details, but I remember clearly that I felt like the entire thing was done by a junior engineer straight out of college. Trying to understand the names of some variables used there, I stumbled upon a reddit thread where the person who worked on the client was trying to explain why things were done the way they were - and I felt like it was me in my first 3 months of software engineering.
How did Backblaze gain this trust in the first place? Is it because nobody is offering "unlimited" storage at the same price point?
Yes, the unlimited storage is one factor. Their detailed write-ups about hard drive reliability and their transparency about how they build their racks (I think they essentially open sourced the design) established a lot of credibility as well. Plus, in my experience, they just worked.
I paid for Backblaze for years but finally cancelled when I junked my last desktop and never got around to installing it on my laptop. I did use their restore functionality a couple of times and it was slow and kind of clunky but it worked.
I’m sad to hear they’ve started dropping stuff from the backup like this. I’ve been contemplating signing back up but most of the stuff I care about is in iCloud or OneDrive so if they aren’t backing that up, it’s pretty useless to me.
After using their website and their app for a few hours I pretty much immediately decided to not proceed with them as the software was clearly not built by a team that has great competency in software development. This was a year ago so they've had plenty of time to polish it.
You sort of let that kind of stuff pass for a hardware company, but backblaze is not a hardware company. There's more to backup than just ensuring the disks at the data centre are replaced in a timely manner.
And from comments here I don't see any other user-friendly options being proposed, it's all just suggestions to glue together some open source software with object storage and be your own sysadmin.
But if that's truly their stance, then they are being deceptive about their non-business offering at the point of sale.
EDIT - see my other comment where I found the actual email
According to support's reply just now, my backups are crippled just like every other customer. No git, no cloud synced folders, even if those folders are fully downloaded locally.
(This is also my personal backup strategy for iCloud Drive: one Mac is set to fully download complete iCloud contents, and that Mac backs up to Backblaze.)
I, on the other hand, as a private consumer, use git for all my hobby projects and note-taking. And my language learning. Of course I do, or I couldn't keep track of what I'm doing over the years, and I wouldn't be able to sort things out. There's nothing professional there, are BB saying that if you try to do something in an orderly and controlled manner, then it's "professional" and shouldn't be backed up? If that's their stance then no wonder people are leaving BB. I for sure won't ever recommend them again.
That aged well...
You should try downloading one of your backed up git repos to see if it actually does contain the full history, I just checked several and everything looks good.
> Bob (Backblaze Help)
> Aug 5, 2021, 11:33 PDT
> Hello there,
> Thank you for taking the time to write in,
> Unfortunately .git directories are excluded by Backblaze by default. File
> changes within .git directories occur far too often and over so many files
> that the Backblaze software simply would not be able to keep up. It's beyond
> the scope of our application.
> The Personal Backup Plan is a consumer grade backup product. Unfortunately we
> will not be able to meet your needs in this regard.
> Let me know if you have any other questions.
> Regards,
> Bob The Backblaze Team
There's no mention of .git being excluded in the Settings or on their support page (https://www.backblaze.com/computer-backup/docs/supported-bac...); they just silently decided to not back up a bunch of my files without telling me... wonderful.
It seems incredibly stupid for a BACKUP PROGRAM to not list hidden files; it should list them with an indication that they're hidden (e.g. _(hidden)_.git).
But .git? Having one does not mean you have it synced to GitHub or anywhere else reliable.
If anything, back up only the .git folder and not the checkout.
But backing up the checkout and not the .git folder is crazy.
No they are not. This is explicitly addressed in the article itself.
You are using it to mean "maintaining full version history", I believe? Another important consideration.
- both services have internal backups to reduce the chance they lose data
- both services allow some limited form of "going back to an older version" (as the article itself states)
Just because the article says "sync is not backup" doesn't mean that's true. It literally is a backup by definition: it makes a copy in another location and even has versioning.
It's just not a _good enough_ backup by their standards. Maybe not even by the standards of most people on HN, but out there many people are happy with far worse backups, especially w.r.t. versioning: for a lot of (mostly static) media, the only reason you need version rollback is if a corrupted version gets backed up. And a lot of people mostly back up personal photos/videos and important documents, all static by nature.
Though:
1. It doesn't really fulfill the 3-2-1 rule; it's only 2-1-1 (local, one backup in the MS/Dropbox cloud, one offsite). Before, when it was also backed up to Backblaze, it was 3-2-1 (kinda). So them silently stopping is still a huge issue.
2. Newer versions of the 3-2-1 rule also say to treat the 2 not just as 2 backups, but also as 2 "vendors/access accounts". With the OneDrive folder being pretty much OneDrive-controlled, this is 1 vendor across local and all backups, which is risky.
They don't need to be in my case, I'm only using them now because of existing shortcuts and VM shares and programs configured to source information from them. That doesn't mean I don't want them backed up.
Same for OneDrive: Microsoft configured my account for OneDrive when I set it up. Then I immediately uninstalled it (I don't want it). But I didn't notice that my desktop and documents folders live there. I hate it. But by the time I noticed it, it was already being used as a location for multiple programs that would need to be reconfigured, and it was easier to get used to it than to fix it. Several things I've forgotten about would likely break in ways I wouldn't notice for weeks/months. Multiple self-hosted servers for connecting to my android devices would need to reindex (Plex, voidtools everything, several remote systems that mount via sftp and connected programs would decide all my files were brand new and had never been seen before)
Complete lack of communication (outside of release notes, which nobody really reads, as the article too states) is incompetence and indeed worrying.
Just show a red status bar that says "these folders will not be backed up anymore", why not?
So my idea is that it's a competency problem (lack of communication), not malice. But it's just a theory, based on my own experience.
In any case, this is a bad situation, however you look at it.
Maybe there's something newer/better now (and I bought lifetime licenses of it long ago), but it works for me.
That said, I use Arq + Backblaze storage and I think my monthly bill is very low, like under $5. Though I haven't backed-up much media there yet, but I do have control over what is being backed-up.
I wish lifetime licences were still sold.
If you've got huge amounts of files in OneDrive and the backup client starts downloading every one of them (before it can reupload them again), you're going to run into problems.
But ideally, they'd give you a choice.
(as a side note, it's funny to see them promoting their native C app instead of using Java as a "shortcut". What I wouldn't give for more Java apps nowadays)
Edit: on top of that I've built a custom one-page monitoring dashboard, so I see everything in one place (https://imgur.com/B3hppIW). I'll open source it; it's a decent architecture, I just need to clean up some secrets from the Git history...
For stuff I care about (mostly photos), I back them up on two different services. I don't have TBs of those, so it's not very expensive. My personal code I store on git repositories anyway (like SourceHut or Codeberg or sometimes GitHub).
JottaCloud is "unlimited" for $11.99 a month (your upload speed is throttled after 5TB).
I've been using them for a few years for backing up important files from my NAS (timemachine backups, Immich library, digitised VHS's, Proxmox Backup Server backups) and am sitting at about 3.5TB.
I’ve added restic to my backup routine, pointed at cloud files and other critical data
I know the post is talking about their personal backup product but it's the same company and so if they sneak in a reduction of service like this, as others have already commented, it erodes difficult-to-earn trust.
On macOS.
My understanding is that a modern, default onedrive setup will push all your onedrive folder contents to the cloud, but will not do the same in reverse -- it's totally possible to have files in your cloud onedrive, visible in your onedrive folder, but that do not exist locally. If you want to access such a file, it typically gets downloaded from onedrive for you to use.
If that's the case, what is Backblaze or another provider to do? Constantly download your onedrive files (that might have been modified on another device) and upload them to backblaze? Or just sync files that actually exist locally? That latter option certainly would not please a consumer, who would expect the files they can 'see' just get magically backed up.
It's a tricky situation and I'm not saying Backblaze handled it well here, but the whole transparent cloud storage situation thing is a bit of a mess for lots of people. If Dropbox works the same way (no guaranteed local file for something you can see), that's the same ugly situation.
However, there is a very good reason for not backing up what is in effect network attached storage. Particularly for OneDrive, as it often adds company SharePoint sites you open files from as mountpoints under your OneDrive folder (business OneDrive is basically a personal SharePoint site under the hood). Trying to back them up would result in downloading potentially hundreds of gigabytes of files to the desktop only to then reupload them to the backup provider. That would also likely trigger data exfiltration flags at your corporate IT.
A Dropbox/OneDrive/Drive/etc folder is a network mount point by another name. (Many of them are now implemented as FUSE mounts or an equivalent OS API, not as folders on disk.) It's fundamentally reasonable for software that promises backing up the local disk not to back up whatever network drives you happen to have signed in/mounted.
Except that before they did and then they didn't without any proper notification (release notes don't count for significant changes like this).
They should have just added a pop-up or an email, or both: a heads-up in advance and then again when the change actually kicked in.
The problem is not them not backing it up by default, but:
* changing an existing setting to back up less by default
* essentially hiding the change from the user, as it is not shown on the directory exclude list
Regardless of the OP's issues:
- on macOS, since 9.0.2.784 (released in 2023), all .git folders are included in backups
- cloud drives are problematic to back up because they all use extension plugins to hide the network, and your local disk only contains stubs instead of actual files. If Backblaze scans it fully, it'll download everything and exhaust your disk space; there's no easy solution here.
I don't buy for a minute that they were trying to be "sneaky" to save some $$. I instead feel like, for the majority of users, they felt it was misleading to back up stubs only and would rather not brick user computers by downloading all the files. Remember, they can't access your cloud disk directly, so the only way they can get the file contents is by doing an fread and letting the cloud drive client sync the content on demand.
Feel free to reach out to me if you have any questions about setting up duplicati.
His daughter-in-law had gifted him a really nice new system. His old system wasn't too bad, either. He'd mostly been relying on an external USB HDD for data. He used Thunderbird for e-mail, which I am quite unfamiliar with.
As we worked on the migration, I collected all the apps and software he had been using, which he would need on the new system, and it wasn't much. I also complimented him on his "online hygiene" insofar as never clicking on suspicious links, or downloading suspicious software; his system had no malware and no shovelware, no unwanted browser bars or spyware was found.
We were completing the migration when I noticed a large discrepancy between the "new data" HDD space and the old data, but I needed to delete the old partition to complete the upgrade, and I flagged this with him: I said, "look this makes me uneasy: do you still want to move forward?" and he nodded approval, so I deleted the partition. Then we discovered that we had just lost many gigabytes of important data, such as was in his Firefox profile and his Thunderbird data, like all his email which had been downloaded locally. I turned white as a sheet and I was ready for him to sue me or something.
He was surprisingly sanguine about this, and he says, "What about Backblaze?" and I gaped at him, "You had an online backup of all this???" and he goes "Sure, here's how to install it..." and we installed his little Backblaze systray widget, and all his data began streaming back in. Nothing at all was lost, because he'd also been meticulous about using this app!
So that was the day I learned about Backblaze and their services, and I was intensely grateful to them for saving my bacon for sure, and we remained friends, and we finished the migration in one day, and he was grateful to me and my expertise, and not at all worried about the crippling data loss which I had incurred with my cavalier ignorance.
Trying to audit—let alone change—the finer details is a pain even for power users, and there's a non-zero risk the GUI is simply lying to everybody while undocumented rules override what you specified.
When I finally switched my default boot to Linux, I found many of those offerings didn't support it, so I wrote some systemd services around Restic + Backblaze B2. It's been a real breath of fresh air: I can tell what's going on, I can set my own snapshot retention rules, and it's an order of magnitude cheaper. [2]
____
[1] Along the lines of "We have your My Documents. Oh, you didn't manually add My Videos or My Music for every user? Too bad." Or in some cases, certain big-file extensions are on the ignore list by default for no discernible reason.
[2] Currently a dollar or two a month for ~200gb. It doesn't change very much, and data verification jobs redownload the total amount once a month. I don't back up anything I could get from elsewhere, like Steam games. Family videos are in the care of different relatives, but I'm looking into changing that.
As for GUIs in general... Well, like I said, I just finished several years of bad experiences with some proprietary ones, and I wanted to see and choose what was really going on.
At this point, I don't think I'd ever want a GUI beyond a basic status-reporting widget. It's not like I need to regularly micromanage the folder-set, especially when nobody else is going to tweak it by surprise.
_____
[1] The downside to the dumb-store is a ransomware scenario, where the malware is smart enough to go delete my old snapshots using the same connection/credentials. Enforcing retention policies on the server side necessarily needs a smarter server. B2 might actually have something useful there, but I haven't dug into it.
Preferably cheap and rclone compatible.
Hetzner storagebox sounds good, what about S3 or Glacier-like options?
I assume when asking such a question, you expect an honest answer like mine:
rclone is my favorite alternative. Supports encryption seamlessly, and loaded with features. Plus I can control exactly what gets synced/backed up, when it happens, and I pay for what I use (no unsustainable "unlimited" storage that always comes with annoying restrictions). There's never any surprises (which I experienced with nearly every backup solution). I use Backblaze B2 as the backend. I pay like $50 a month (which I know sounds high), but I have many terabytes of data up there that matters to me (it's a decade or more of my life and work, including long videos of holidays like Christmas with my kids throughout the years).
For super-important stuff I keep a tertiary backup on Glacier. I also have a full copy on an external harddrive, though those drives are not very reliable so I don't consider it part of the backup strategy, more a convenience for restoring large files quickly.
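For anyone wanting to replicate the setup, a minimal sketch assuming two rclone remotes already configured: "b2" (the B2 backend) and "b2crypt" (a crypt remote wrapping a bucket on it); the names are placeholders:

    rclone sync --fast-list --transfers 8 ~/Archive b2crypt:archive
    rclone size b2crypt:archive    # sanity-check what's stored, without decrypting locally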
Those $50 indeed sound high to me. I think I’d be fine depending on the Glacier backup, is that rclone compatible? What do you pay for it?
[0]: https://kopia.io/
If you have a folder shared with 10 people, most likely only a few files will be accessed by others and the rest is dormant on all but one machine. Downloading and storing all these files is an expense in transfer fees and to some extent a waste of local disk space.
For that reason, cloud sync tools no longer copy everything up front, but transfer on-demand. Most tools have an option where you can choose "Make available offline" that will make a specific folder always synced.
That said, silently excluding a folder is very problematic, even if there is a good reason for it.
I work on the open-source Duplicati backup tool (https://github.com/duplicati/duplicati) and we take special care to not silently skip things as this is likely to cause problems when you want to restore later. For instance, you will get a lot of warnings if you try to make a backup of a cloud-synced folder, as the cloud-sync cannot keep up with the speed of the backup.
If you like the pricing of B2 but not the backup tool, you can use a B2 bucket (pay per usage, not flat rate) and have Duplicati back up to the bucket.
I've also configured encrypted cloud backups to a different geographic region and off-site backups to a friend's NAS (following the 3-2-1 backup rule). It does help having 2.5Gb networking as well, but owning your data is more important in the coming age of sloppy/degrading infrastructure and ransomware attacks.
1. You have to check "show hidden files" in the web ui (or the app) when restoring and
2. If you restore a folder that has a '.git' folder inside of it (by checking it in the ui) but you DID NOT check "show hidden files", then the '.git' (or any other hidden file/folder) does not get restored.
Which is.. unexpected.. if I check a folder to restore, I expect *everything* inside of it to be restored.
But the dropbox folder is, in fact, not there. Which is a surprise to me as well. :(
Technically speaking, imagine you're iterating over a million files and some of them are 1000x slower than the others; it's not Backblaze's fault that things have gone this way. Avoiding files that are well-known network mount points is likely necessary for them to be reliable at what they do for local files.
It's important to recognize that these new OS-level filesystem hooks are slow and inefficient - the use case is opening one file and not 10,000 - and this means that things you might want to do (like recursive grep) are now unworkably slow if they don't fit in some warmed-up cache on your device.
To fix it, Backblaze would need a "cloud to cloud" backup that is optimized for that access pattern, or a checkbox (or detection system) for people who manage to keep a full local mirror in a place where regular files are fast. This is rapidly becoming a less common situation. I do, however, think that they should have informed people about the change.
The technical and performance implications of backing-up cloud mount-points are real, but that's zero excuse for the way this change was communicated.
This is a royal screw-up in corporate communications and I would not be surprised if it makes a huge negative impact in their bottom line and results in a few terminations.
There are 2 components in my mind: the backup "agent" (what runs on your laptop/desktop/server) and the storage provider (which BB is in this context).
What do people recommend for the agent? (I understand some storage providers have their own agents) For Linux/MacOS/Windows.
What do people recommend for the storage provider? Let's assume there are 1TB of files to be backed up. 99.9% don't change frequently.
They're really proving lately that they are a company that can't be trusted with your data.
Don't even know why people rely on these guis which can show their magic anytime
* If you value your privacy, you need to encrypt the files on the client before uploading.
* You need to keep multiple revisions of each file, and manage their lifecycle. Unless you're fine with losing any data that was overwritten at the time of the most recent backup.
* You need to de-duplicate files, unless you want bloat whenever you rename a file or folder.
* Plus you need to pay for Amazon's extortionate egress prices if you actually need to restore your data.
I certainly wouldn't want to handle all that on my own in a script. What can make sense is using open source backup software with S3/R2/B2 as backing storage.
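That combination looks roughly like this with restic and a B2 bucket (bucket name and key values are placeholders); client-side encryption, versioned snapshots, and dedup all come with the tool:

    export B2_ACCOUNT_ID=xxxx B2_ACCOUNT_KEY=yyyy
    restic -r b2:my-backup-bucket:laptop init
    restic -r b2:my-backup-bucket:laptop backup ~/Documents
    restic -r b2:my-backup-bucket:laptop forget --keep-last 30 --prune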
Most people (my mom) don't know what s3 and r2 is or how to use it.
I like how you can set multiple keys (much like LUKS) so that the key used by scheduled backups can be changed without messing with the key that I have memorized to restore with when disaster strikes.
It also means you can have multiple computers backing up (sequentially, not simultaneously) to the same repository, each with their own key.
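If the tool being described here is restic (a guess on my part; Kopia manages keys differently), the key management looks like:

    restic -r /mnt/backup-repo key list
    restic -r /mnt/backup-repo key add       # prompts for a password for the new key
    restic -r /mnt/backup-repo key remove <ID>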
also, you pay per-GB. the author is on backblaze's unlimited plan.
git reflog is your "backup". it contains every commit and the resulting log (DAG) going back 90 days. If you do blow away a remote commit, don't fret, it's in your reflog
# list all of the remote HEAD commits you've ever worked with
git reflog origin/master
# double check it's the right one
git log -5 origin/master@{2}
# reset the remote to the right one
git push -f origin $(git rev-parse origin/master@{2}):master
# (optional) reset your local branch
git reset origin/master@{2}
# at this point your local branch has time-traveled, but your working dir will be in the present state (e.g. all the relevant files will show as changed)

The data I'd lose isn't recoverable from anywhere else. That's the entire point of an offline backup provider. If I wanted selective cloud sync I'd use a selective cloud sync product.
What makes this worse than a bad decision is that it was a silent one. No banner, no email, no opt-out prompt, just a release note buried so deep that the only way most people will find out is when they go to restore something that isn't there. That's not an oversight, that's a choice. I've been a paying customer for years. The product I bought backed up everything. That product no longer exists and I wasn't told.
That's a good warning
> Backblaze had let me down. Secondly within the Backblaze preferences I could find no way to re-enable this.
This - the nail in the coffin
Now I discover again through HN, that it's time to find another solution.
It really shouldn't take up much more space or bandwidth.
Personally: I had to go in and edit the undisclosed exclusions file, and restart the backup process. I've got quite a few gigabytes of upload going now.
You have to give Apple credit, they nailed Time Machine. I have fully restored from Time Machine backups when buying new Macs more times than I can count. It works and everything comes back to an identical state of the snapshot. Yet, Microsoft can’t seem to figure this out.
Edit: spelling errors and cleanup
It's always been just janky. A bad app that constantly throws low disk warnings and opens a webpage if you click anywhere on it. Being told the password change dialog in the app doesn't work and having to use the website instead, etc.
Just all round not an experience that inspires confidence. In comparison, Crashplan just worked.
Some restore options also do not let you move data to their cloud for direct download, which makes it feel like they do not actually want users to fully download and recover all of their files.
My experience using restic has been excellent so far: snapshots take 5 minutes rather than 30 with Backblaze's Mac client. I just hope I can trust it…
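On the trust question, restic can verify its own repository, so a periodic check is cheap to script. A minimal sketch (repository path is a placeholder):
# structural check of the repository metadata
restic -r <repo> check
# additionally read back and verify a fifth of the actual pack data
restic -r <repo> check --read-data-subset=1/5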
Doing it silently is disaster.
Making the excludes hidden from the UI is outright malice, because it's far too easy to assume they would just be added as normal excludes and then go "huh, I probably just removed those from the excludes when I set it up".
So I back it up to a NAS. I bought a Synology NAS (back before they turned into an evil company) which includes a Cloud Sync app which will connect to your Google Drive and sync changes every hour. It's technically sync not backup, but because all deleted files go into a "Trash bin" directory that you can set to never empty, it effectively works as backup for deleted files too (though you can't recover older versions of a file that still exists). The really great feature is that it has the option to sync all files that are in Google Docs/Sheets/Slides format as converted to Word/Excel/PPT. And the great thing about the backup running on your NAS is that it doesn't depend on your computer being on or anything.
I know Synology's considered an evil company now because they seem to tie you to their own hard drives now, but I don't know if there's anything else as easy to set up for reliably syncing consumer cloud files to a NAS. Hopefully there is though, if anyone else knows?
And of course, you can similarly run a backup program on your computer to back up your local files to it, as it's just a network mount.
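A minimal sketch of that last step, assuming the NAS share is mounted at /mnt/nas (a placeholder path):
# mirror the local folder onto the NAS; -a preserves metadata, --delete propagates removals,
# so rely on the NAS's own snapshots/recycle bin for point-in-time recovery
rsync -a --delete ~/Documents/ /mnt/nas/backup/documents/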
Now, I:
- Put important stuff in a SyncThing folder and sync that out to 2 different nodes.
- Clone stuff to an encrypted external drive at home.
- Clone stuff to an encrypted external drive at work and hide it out in the datacenter (fire suppression, HVAC, etc).
It's janky but it works.
I used to use a safe deposit box but that got too tedious.
"The Backup Client now excludes popular cloud storage providers [...] this change aligns with Backblaze’s policy to back up only local and directly connected storage."
I guess Windows 10 and 11 users aren't backing up much to Backblaze, since Microsoft is tricking so many into moving all of their data to OneDrive.
I hope Backblaze responds to this with a "we're sorry and we've fixed this."
Storage Box is a little more effort to set up since it doesn't provide an S3 interface and I instead had to use WebDAV, but it's more affordable and has automated snapshots that add a layer of easy immutability.
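For what it's worth, restic can sit on top of that kind of remote too. A minimal sketch, assuming an rclone WebDAV remote named "storagebox" has already been configured (the remote name and paths are placeholders):
# restic drives rclone as its transport, so WebDAV works as a backend
restic -r rclone:storagebox:backups init
restic -r rclone:storagebox:backups backup ~/Documents
restic -r rclone:storagebox:backups snapshots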
`git reflog` is your friend. You can recover from almost any mistake, including force-pushed branches.
The configuration and logging formats they use are absolutely nonsensical.
I put a Nextcloud snap on a VPS in the same city. Fast and no limitations.
- backup to real S3 storage
- LLMs on real API tokens
- search on a real search API, no adverts
- Google account on Workspace and GCP, no selling the data
- etc.

Only way to stop corpos treating you like a doormat.
Any suggestions for alternatives?
I never tried this particular Backblaze product because I don't trust an opaque blob touching my most valuable data, nor do I trust unlimited plans that don't mention what the limits are, at least in fine print.
We're also seeing this play out in real time with Anthropic with their poop-splatter-llm. They've gone through like 4 rug-pulls, and people STILL pay $200/month for it. Every round, their unlimited gets worse and worse, like I outlined above.
Pay as you go is probably fairer. But SaaS providers really hate providing direct and easy-to-use tools to identify costs, or <gasp> limit them. A storage/backup provider could easily show this. LLM providers could show near-realtime token utilization.
But no. Dark patterns, rug-pulls, and "i am altering the deal, pray i do not alter it further".
They had a corrupted backup and no way to know it or to do anything about it.
I still like Backblaze; they were nice back in the days when I was running Windows. Their desktop app is probably one of the best in the scene.
You all skipped the most important part: the 3-2-1 backup rule.
Basically, you were using Backblaze as your sole backup system; what did you think was going to happen?
You do not back up data and call it a day; you must have a process in place to go and check random files and folders for corruption. This process would have warned you that the sync was not 1:1.
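A minimal sketch of that kind of spot check, assuming the backup can be restored or mounted locally (both paths are placeholders; shuf is from GNU coreutils):
# compare 20 randomly chosen files between the live data and the restored copy
SRC=/home/me/data
DST=/mnt/restored-backup/data
cd "$SRC" || exit 1
find . -type f | shuf -n 20 | while read -r f; do
  cmp -s "$SRC/$f" "$DST/$f" || echo "MISMATCH or MISSING: $f"
done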
The thing to emphasize here is that those who purchased these retail Backblaze plans fell into two buckets:
1. The technically savvy who were following the industry standard 3, 2, 1 backup rule, arbitraging the "unlimited" plan, waiting for the game to be over.
2. The technically unsavvy who believed in the "unlimited" plan
My bet is that 2 is screwed, and that's the majority of the users of this specific Backblaze plan.
This is likely to have rippling effects on Backblaze, including their unrelated object storage plans. When there are choices available, people don't appreciate being ripped off, and right now there are a lot of choices in object storage.
It almost seems like they're taking it personally, as some kind of intentional slight against them.
Most users would not want Backblaze to back up other cloud synced directories. This default is sensible.
To give a bit more context on the “why”: these cloud storage providers now rely heavily on OS-level frameworks to manage sync state. On Windows, for example, files are often represented as reparse points via the Cloud Files API. While they can appear local, they are still system-managed placeholders, which makes it difficult to reliably back them up as standard on-disk files.
Moreover, we built our product in a way that does not back up reparse points, for two reasons:
1. We wanted the backup client to be light on the system and only back up needed user-generated files.
2. We wanted the service to be unlimited, and following reparse points would lead to us backing up tons of data in the cloud.
We’ve made targeted investments where we can, for example, adding support for iCloud Drive by working within Apple’s model and supporting Google Drive, but extending that same level of support to third-party providers like Dropbox or OneDrive is more complex and not included in the current version.
We are currently exploring building an add-on that either follows reparse points or backs up the tagged data in another way.
We also hear you clearly on the communication gap. Both the sync providers and Backblaze should have been more proactive in notifying customers about a change with this level of impact. Please feel free to reach out to me directly if you have any questions.
So I think you're setting up new customers with the same expectations I had after 14 years of service.
Support just told me 'Backblaze not backing up files stored by OneDrive is not a change in policy. It is the enforcement of policy that has existed since the inception of the Computer Backup service.'
It may be right, but it makes me feel like I have been buying a different product from you than I thought for the last 14 years, and it disintegrated my trust in milliseconds...
> This article answers the question, "What does Backblaze back up?" Backblaze backs up all of your data across all of the user profiles that are on your computer as soon as you install the client.
> Backblaze believes that you do not need to worry whether you selected all of the files that you care about, put any files in a different location on your computer, or added new files that may not be included in your online backup. Therefore, Backblaze automatically selects all of your data.
This is at best flat out wrong, at worst a blatant lie. But this was what I thought I was buying and paying for. Turns out you do have to worry!
Don't lie about other stuff you don't back up. Very disappointed in Backblaze.
as well as giving users the rightful choice about such things
bad call
What is the point of Backblaze at all at this point? If you are a consumer, all your files are probably IN OneDrive or iCloud or soon will be.
I get that changing economics make it more difficult to honor the original "Backup Everything" promise but this feels very underhanded. I'll be cancelling.
I mean, they do one thing.
Looking forward to seeing if they respond.
I’m only in my 40’s, I don’t require glasses (yet) and I have to actively squint to read your site on mobile. Safari, iPhone.
I’m pretty sure you’re below the minimum contrast levels WCAG permits.
You can override these styles in the developer tools to change the color:
.default { font-family:Verdana, Geneva, sans-serif; font-size: 10pt; color:#828282; }
.admin { font-family:Verdana, Geneva, sans-serif; font-size:8.5pt; color:#000000; }
.title { font-family:Verdana, Geneva, sans-serif; font-size: 10pt; color:#828282; overflow:hidden; }
.subtext { font-family:Verdana, Geneva, sans-serif; font-size: 7pt; color:#828282; }
.yclinks { font-family:Verdana, Geneva, sans-serif; font-size: 8pt; color:#828282; }
.pagetop { font-family:Verdana, Geneva, sans-serif; font-size: 10pt; color:#222222; line-height:12px; }
.comhead { font-family:Verdana, Geneva, sans-serif; font-size: 8pt; color:#828282; }
.comment { font-family:Verdana, Geneva, sans-serif; font-size: 9pt; }

For desktop browsers, I also have a bookmarklet on the bookmarks bar with the following Javascript:
javascript: document.querySelectorAll('p, td, tr, ul, ol').forEach(elem => {elem.style.color = '#000'})
It doesn't darken the text on every webpage but it does work on this thread's article. (The Javascript code can probably be enhanced with more HTML heuristics to work on more webpages.)

Note that assigning '#000 !important' to elem.style.color has no effect; to force the override you'd need elem.style.setProperty('color', '#000', 'important').

Is this maybe a pixel-density issue on the iPhone?
I wouldn't mind a darker and higher weight font though.
One day try throwing a pair on; you'll be surprised. The small, thin font is causing this, not the text contrast. This and low-light scenarios are the first things to go.
Whatever causes it, I do wear glasses (and on a recent prescription too) and the text is still very hard to read.
As for mentioning WCAG - so what if it doesn’t adhere to those guidelines? It’s his personal website, he can do what he wants with it. Telling him you found it difficult to read properly is one thing but referencing WCAG as if this guy is bound somehow to modify his own aesthetic preference for generic accessibility reasons is laughable. Part of what continues to make the web good is differing personal tastes and unique website designs - it is stifling and monotonous to see the same looking shit on every site and it isn’t like there aren’t tools (like reader mode) for people who dislike another’s personal taste.
Firefox users: press F9 or C-A-R
What is it supposed to do?
There is no mention of F9 on this support page either:
https://support.mozilla.org/en-US/kb/keyboard-shortcuts-perf...
Am I missing something?
When trying to copy files from a OneDrive folder, the operation fails if the file must be sync'd first.
I, for one, do not think it is fair to blame Backblaze for the shortcomings of another application that breaks basic functionality like copying files.
https://techcommunity.microsoft.com/discussions/onedriveforb...
If they start excluding random content (eg: .git) without effective notice, maybe they AREN'T backing up everything you think they are.
Pinning this squarely on user error. Backblaze could clearly have done better, but it's such a well-known failure mode that it's not far off from refusing to test restores of a bunch of tapes left in the sun for a decade.
It isn't user error if it was working perfectly fine until the provider made a silent change.
Unless the user error you are referring to is not managing your own backups, like I do. Though this isn't free from trouble either: I once had silent failures backing up a small section of my stuff for a while, because of an ownership/perms snafu and my script sending its error reports to stderr rather than anywhere I'd generally see them. Luckily an automated test caught it (every now and then it scans for differences between the whole backup and the current data): because it could see the source, it noticed a copy wasn't in the latest snapshot on the far-away copy. Reliable backups are a harder problem than most imagine.
Also consider e.g. ~/.cache/thumbnails. It's easy to understand as a cache, but if the thumbnails were of photos on an SD card that gets lost or immediately dies, is it still a cache? It might be the only copy of some once-in-a-lifetime event or holiday where the card didn't make it back with you. Something like this actually happened to me, but in that case, the "cache" was a tarball of an old photo gallery generated from the originals that ought to have been deleted.
It's just really hard to know upfront whether something is actually important or not. Same for the Downloads folder. Vendor goes bankrupt, removes old software versions, etc. The only safe thing you can really do is hold your nose and save the whole lot.