It will be able to help you more often than not, and it's FOSS.
I asked my father to just give up on it and manage pictures through the file system. Photos (formerly iPhoto) databases also get corrupted on a Synology NAS. It's a matter of time. The strange thing is that it can go well for months on end, giving you a false sense of security.
My advice is to do iCloud or local drive backups and stay away from Time Machine and Apple databases (although my father also had a lot of problems with an "incorrectly unplugged" external HFS+ drive). I can't find the sources right now, but after my father's last drama (and there have been several) I did some intensive searching and this was my conclusion.
What’s more likely is Apple’s notoriously unreliable implementation of SMB causing the problem (and SMB is the only option now that AFP support on the Mac is dead).
I have a Synology DS220+ and connecting to it from a Windows machine vs a MacOS machine is like night and day.
On Windows, it literally feels like the NAS is an extension of my local hard drive. Browsing huge directories of thumbnails is snappy, file and folder names appear instantly. It’s a dream.
On MacOS, connecting to anything over SMB is a total nightmare. Aside from the constant mounting and unmounting (fun!), it’s just plain unreliable and slow.
And people have been complaining about this for years.
What’s even funnier: I have a friend who works for Apple, and apparently they use NAS storage in some teams and deal with the exact same annoyances!
If Apple’s own employees have this problem, it’s hopeless they'll ever fix it for customers.
I’d put my money on Apple’s SMB implementation being the root cause of this file corruption issue that has been all over the Reddit Synology user forums lately.
It used to be over Apple's proprietary AFP protocol. With the exception of Apple's now-discontinued Time Capsule product line, all NASes implement AFP using the open-source Netatalk, presumably based on a reverse-engineered AFP protocol. And it's unreliable.
With recent versions of macOS, Time Machine switched to the SMB protocol. Apple has a custom SMB implementation, and all NASes use Samba. And it's still unreliable!
I guess the only reliable solution for using Time Machine over the network is a Mac with File Sharing over SMB enabled. At least then both ends run Apple's SMB implementation.
I wonder if NFS works any better? Or maybe Apple's old AFP, which I think used to be more solid than SMB on Macs? Though I read something about Apple deprecating (or removing?) AFP support recently.
Are there any other options?
...I suppose anyone who has the time to work on this problem could find the last open-source release that Apple used, and port it to the newer versions of macOS.
How is AFP support on the Mac dead? I’m doing Time Machine backups to a Synology via AFP.
Are we sure this isn't a Synology issue? I'm all-SMB for both shares and Time Machine/CCC backups on a QNAP NAS and have never had an issue. (Caveat: I moved from Lightroom to Photos this year, and am now using an iSCSI APFS volume for that.)
And now that we're coming into 2022, let me say this has been the case for decades. Their SMB implementation has gotten better in the past 5-6 years, but it is still far from the fit and finish on Windows and Linux.
There are no destructive edits in Lightroom unless you really go out of your way to cause the destruction
It also has client-side face recognition / clustering which relies on a local database, indexing by geographic location for GPS-tagged images, etc.
Essentially nobody needs Lightroom until they try it, after which it easily becomes impossible to live without and there is no replacement
I myself use NextCloud for everything, I recently moved from Android to iOS and it's nice to see most things working... except that NextCloud has issues making previews from .heic pictures (or I should say heic picture containers containing heif images coded in hevc :s), and so the drama starts again. It's always plug and pray outside the Apple ecosystem, always ymmv.
The "memories" slideshows that iPhone or gphoto generate are sometimes also very nice to see.
The search functionality also comes in handy once in a while, so that I can search for pictures of certain things.
For sharing photos the shared albums are also very easy to work with. Both to create and the receiver to import any interesting photos to my/their library.
Nowadays we don’t have events anymore; it’s just a continuous flow of random photos that may or may not belong to a specific event. The great part is that photos are always in chronological order and I never have to deal with “files” (copies, same names, etc.).
The only exception is professionals, and Photos.app definitely isn’t intended for them.
I used AFP, which was recommended back then, and that worked really well for years (since 2011). But since (maybe) Catalina, issues started creeping up and it would just randomly fail. It used to be once in a while; then it became a weekly occurrence before I gave up.
Samba isn't better, my mounted shares get randomly disconnected overnight fairly often too (even now on Monterey), and switching from the old Synology to a fresh dedicated NAS machine didn't change a thing.
At that point I think in general it's just "local networking" that became less reliable around that time, whether it's some power saving feature, or something else up the stack, I don't know.
The only scenario where Time Machine works flawlessly for me is using an external SSD drive for backup, formatted as APFS. At least for now.
At this stage I think Time Machine is barely fit for purpose for backing up over the network. I've lost days on this issue over the years too.
I have always been totally confused as to whether I should be using AFP or SMB (tried both). As others have said, SMB often seems very unreliable, and AFP is supposedly being deprecated...
Mounting network shares from my Synology to Mac(s) is never flawless either. As other comments have noted, this experience is very much worse than what it used to be like in Windows (not that I've mounted network shares in Windows for a while).
The critical setting for reliability is to use AFP and not SMB. To this end I have two Synology NAS devices—a multi drive unit shared as SMB for general use and a single drive unit shared as AFP for Time Machine. While I do have occasional backup trees go bad (once every two or three years) the backups themselves are still fine and so I just start a fresh one.
I've been using Time Machine with a QNAP SMB target (for as long as Time Machine has supported SMB) without problems. Reading this thread makes me wonder if the problem is actually Synology NASs.
The lesson I learned was - do not use TimeMachine.
Now I have been using Vorta (UI for Borg) https://github.com/borgbase/vorta for a long time and everything is fine.
Same. I really like it. Also sending my backups to https://borgbase.com which was relatively painless to setup.
Synology can store all the extended attributes that HFS+/APFS can. They do not use the standard Samba *_xattr modules though; they use their own, and the result is all the @eaDir stuff. Do not delete it!
On the other hand, Time Machine is perfectly capable of damaging its archive on the local drive just fine.
Well, my Time Machine keeps corrupting backups even when targeting the Apple Time Capsule, so I'm not sure if that really helps at all.
omg reading this made me relive that horror I encountered too, of course with pics of the little one.
Now I have 3 backups in 2 locations (one of which is just a raw export of all the pictures again), using Lightroom. I don't trust Photos anymore, and yes, they're on Apple filesystems.
My solution was for the LR database to live on local SSD, and the library/catalog on an SMB share on a ZFS server, with weekly backup to AWS Glacier. Other than having to reconnect anytime I wake up the machine from sleep, it works pretty well, and I never have worried about corruption.
The only flaw in the plan AFAICT is that if a bug in LR or the Mac introduces corruption, ZFS will happily store the checksummed corrupted file. I should probably add ZFS snapshots.
I have a 3x backup combo of Time Machine to local Synology (and I may as well not bother with this), Arq Backup to Arq Cloud (not flawless but I trust it more than Time Machine) and CCC to local USB-C SSD drives.
CCC is the only backup I actually trust out of the 3, but it's not automatic and relies on my plugging my backup SSDs in occasionally to clone the whole drive. (Which is more an issue with my workflow than CCC itself).
But ignoring that: CCC is an excellent backup solution for MacOS. Have been bitten by Time Machine twice. Still use it, but do not rely on it.
One of these days, I'll plan on writing my own photo management system, with recoverable indexing, optional facial scanning, and zero phoning home.
I have the following setup: my main machine is Windows and my photos are on a local NTFS drive, that is backed up (mirrored) on a Netgear NAS via rsync; I rotate the backup drives on a weekly basis.
Every time the backup job runs it sends an email to tell me how it went. The email is sent via gmail. This used to work well but at some point (a year ago maybe?) gmail decided this was "unsecure" and stopped forwarding the emails. Instead it sent a notice that "somebody tried to send an email and we blocked it".
I couldn't be bothered to fix it and accepted the gmail warning notice in lieu of the actual Netgear backup report.
Then eventually I upgraded the email notification system... only to find out that the backups were failing systematically.
Luckily nothing was lost as the main drive was fine; I was able to fix the problem and do the backups correctly.
But of course it could easily have gone a different way: the main drive could have failed and with empty mirrors there would have been no solution, and I would have had only myself to blame.
It is so easy to tell oneself that everything is a-okay when it really isn't.
Also, major plus: it supports lossy conversion, which churns out files ⅓ the size of the originals, with no perceptible loss in quality. I ended up converting my entire photo catalog, saving hundreds of GBs of disk space. The tool has a CLI as well.
https://helpx.adobe.com/camera-raw/using/adobe-dng-converter...
> DNG strips out most of the unrecognized meta data (such as Active D-Lighting, Picture Controls, Focus Point, etc) from RAW files, making it impossible to retrieve this data from DNG in the future.
The last time I used a DSLR was when I was doing color research in 2012. At the time, raw format was the only thing that preserved all necessary information to make scientific observations.
Most people don't need to care about such things. I just wanted to mention you're irretrievably throwing away metadata when you DNG-dong your pictures.
It's probably a worthwhile trade in most cases though.
Which "RAW" format? I think that is the problem - there are many and few are completely documented, which is where DNG comes in (https://www.adobe.com/content/dam/acom/en/products/photoshop...).
Great point about throwing away metadata, and probably worthwhile to safeguard that one's flavor of RAW files (CR2/NEF/etc) can be reliably read in the future when the software necessary to read them inevitably disappears from the cloud.
The dng converter is a useful tool, though if using Lightroom it’s an extra step. Usually camera makers supply some software that can do the same.
Generally the raws have a lot more information than the lossy photos, so if you need to do some editing (bring up shadows or darken highlights), it’s worth keeping the raw around. But generally JPEGs are quite good. (In Photoshop I’ve converted a raw, loaded both images, and did a diff; you can see where the changes from compression happen, but it’s quite minor.)
For me this is the perfect combo/workflow and I'm happy to pay the subscription. LRCC has some bugs and the gallery view could be better though.
The best thing is that I manage the photo files and folder structure locally with LRC, and can back them up easily, and maintain a good archive without having to sync everything.
I found Apple Photos way of taking full control over your photos super painful. The app is terrible and some tasks are so inefficient. Changing your primary photo library to another disk for example will take days - unless you have an SSD. This is absurd. The library is one folder called `Photos.photolibrary`, but the app seems to need to read every single file. And so many other issues too.
Edit: Quick googling finds many people using Macs report files are corrupted when written to a Synology NAS over SMB.
Having said that people need to take care backing up their catalog because without that single catalog file you will lose decades of photo editing work in Lightroom.
The title of this post should be changed to “file transfers from MacOS to Synology NAS cause file corruption”
It’s a known issue with no fix at the moment.
Coupled with using a good directory structure, that significantly lessens the impact of losing a catalog file.
Lightroom can move or copy images to a new folder for you, but they're still just normal image files you can open with any supported software.
I wrote my own consistency checker, as I wasn't happy with what was out there. I wanted it to be simple, and maintainable in the long term (>10 years horizon). See https://github.com/jwr/ccheck if you need something like this. I now update my checksums regularly and check for corruption.
That it’s not true is pretty much the reason why ZFS was created, though lots of people still don’t want to hear it, including companies (APFS only cows and checksums metadata for instance).
[1] https://www.backblaze.com/blog/the-3-2-1-backup-strategy/
[2] Forbidden Arts of ZFS | Episode 2 | Using ZFS on a single drive [https://www.youtube.com/watch?v=-wlbvt9tM-Q]
Ubuntu/Manjaro come with ZFS support by default. There is no administration involved beyond the initial setup.
If you run an OS with first-class ZFS support and your files arrive on the platform as soon as possible over as few intermediaries as possible, the chances of such mishaps are greatly reduced.
I finally upgraded to a synology nas and I feel a little silly for not doing it earlier.
It's far more convenient given I bounce between 3 machines during the day (personal laptop, work laptop, desktop), my wife can easily access files on her machines as well.
Added bonuses are that it comes back up on its own after a power outage, doesn't require my desktop to be on for me to hit it externally or for a successful backup, and I don't have to remember where I last placed the usb drive.
Basically - USB drives totally work, but the NAS is better in pretty much every way outside of price (and possibly some configuration, if the person isn't very technical).
My old laptop is effectively my NAS server.
2. Wireless HDDs are expensive. I think most people don't actually need a NAS where the files are shared by everyone on the network. They just want wireless access to their personal files.
I have been made a fool of.
If you are using unreliable connection, use something that can verify the transfer, rsync, for example.
How did you get that? From reading the article, it sounded like they just moved their RAW files to the NAS (no Time Machine involved). One thing they use for backup is Time Machine; the RAW files came from a USB drive that they had wiped, but they forgot to attach it during any backup, so the files were omitted from the Time Machine backup. The details were light in the post, so I'm inferring most of this. I'm not sure how "Apple's proprietary Time Machine format" (it's a disk image and the actual data is just files) was to blame.
Time Machine on a NAS (unrelated to Synology) can have problems. macOS generally tells you "verification" failed and recommends starting a new backup, which sucks, is time consuming, and puts you at risk of data loss during the initial backup, but is way better than keeping around a corrupt backup. I've used a Time Machine network backup as one form of backup for many, many years, both hand-rolled and on a Synology. The times I got a corrupted backup, I had either manually mounted the backup disk image while a backup started, or disconnected or closed my laptop while backing up. I'm not sure if the problem is on Apple's end, in the server's implementation of Time Machine, or in the SMB or AFP implementation. I've generally been able to "repair" these backups by running disk utility (fsck) and changing a value in a config file, but that can take a very long time over a network, and I'm not sure I'd trust that backup anyway.
I think the cause of the problem was close to what the author suspects. Something got corrupt in reading, transferring, or writing the data. Maybe it was non-ECC ram, bits flipped in transit (due to hardware or protocol), or a corrupt disk. The source data seemed ok since he could recover it, the disks appear ok since they wrote ok a second time--but maybe it wrote to a different area of the disk? I've had RAM issues that only showed up under certain circumstances. Maybe "Enable data checksum for advanced data integrity" was on the second time? I kind of think confirming data was written correctly and using a filesystem that protects against bitrot (or keeping hashes of the files yourself) are the most accessible ways to prevent this. Unfortunately, knowing the hashes no longer match means you just know the files are corrupt and won't help you unless you have another copy.
It was an eye-opener to me to realize that data corruption is much more common than most people think.
On the flip side, because single (or a few) bit flips often go unnoticed, people overestimate the impact of data corruption. The idea that if your image or video looks good then it must be intact is flawed. Even software binaries survive a bit of corruption pretty well.
I think in the long run file system based integrity checks everywhere would be great. For now, in a world with a multitude of storage technologies and file systems, shasums will have to do.
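For example, a plain checksum manifest covers this without any special filesystem (directory names here are placeholders):

```shell
# Record a checksum for every file in the photo tree, right after import.
cd /mnt/photos
find . -type f -print0 | xargs -0 sha256sum > ~/photos.sha256

# Months later: re-read every file and compare against the manifest.
# Any corrupted or bit-rotted file is reported as FAILED.
sha256sum -c ~/photos.sha256
```

`sha256sum` is GNU coreutils; on macOS, `shasum -a 256` (and `shasum -a 256 -c`) does the same job. As the parent comment notes, a failed check only tells you a file is bad; you still need a second copy to restore from.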
What's the point of using btrfs if Synology disables one of its signature features?
I have tens of thousands and they are probably my most treasured possession.
It’s quite concerning that they are languishing on Google Photos, with a few partial backups floating around that I’m not confident in.
I have had a few attempts at cleaning it up but haven’t found the right software.
I would also like to print my favourites, which probably also amounts to thousands of pictures, but working through the sheer volume is quite intimidating.
Feels like the whole thing needs a better workflow around it.
Where do I even begin? What app do I use to go through them all?
I've searched and tried out many apps and I still don't have a good workflow.
I truly don't understand how the pros do it.
Keep this folder backed up! It's more important than the metadata catalog but now all the raw imagery is in one place.
Then maintain the lightroom catalog with lots of metadata, including using the facial recognition system for offline person recognition, the flag system enabling a 2-pass deletion review, star system for ranking photos by desirability, and add tags to everything. Go through each filter mechanism and develop a strategy for using each filter.
Use the image gallery filter system to the limit.
If you go through images by date & time captured you can usually bulk tag images much more quickly than if you have to switch between events.
I think it's a solved problem, just lacking in good turn-key solutions.
I know it may sound like overkill, but I switched from a Synology NAS to ZFS for the checksumming specifically to avoid this fate.
Not a knock on Synology specifically; I imagine all are about the same in this regard. What I wish is that there were a consumer-friendly, low-power NAS running ZFS so I don't have to maintain a server.
(And a weekly sweep to AWS glacier to be sure)
The whole ordeal was largely of my own making. I never anticipated that the simple act of copying files from one place to another could go so horribly wrong.
It isn't of the user's own making, or the user's fault at all! (A voice is literally screaming this inside my head.) Tech shouldn't be like this. The DS218 supports Btrfs, which prevents certain kinds of file corruption, but that's not what's happening here either.
I have been saying (or ranting) about this for more than a decade now. [1] And every time, the answer was that there is no market for it, or that consumers aren't willing to pay a premium for a high-quality NAS. The Kobol NAS was my hope for what a high-quality, reasonably priced, and reasonably easy-to-set-up-and-manage NAS could be. But they closed down as well [2].
I really hope Kobol releases all of their work as open source, including the hardware design. Maybe someone (or I) could crowdfund it once the chip shortage is over.
As an engineer, I'm good doing it for enterprise stuff, but it's really challenging for personal stuff.
I don't trust Google products anymore, I got burned too many times when they killed their products. They work, but I just don't want to use them anymore.
I currently have a 100GB OneDrive subscription for $3, so that works for now, but I'm pretty close to the storage limit. I'm not a Microsoft fan, but considering it's integrated with Windows now, I assume it will stick around for long.
The same is probably true of Apple and iCloud.
I've relocated internationally a dozen times, so a NAS doesn't work for me.
There are also other options for just storage like S3 or Azure storage, but the price seems to be almost the same as OneDrive.
Same here.
I used to use 2x Seagate portable 5TB. Would mirror one to the other every week.
Now I use a Western Digital MyBook 20TB in RAID 1, and backup this weekly to my portable externals using Carbon Copy Cloner for macOS (ideally I would have an identical second one for this).
And run Backblaze for offsite backup.
I avoided a NAS because I prefer the speed of USB3 versus network - super quick to view RAW and video, and for ease of relocation.
I also have a 1TB portable SSD for my Apple Photos syncing iPhone content without filling up my MacBook disk.
The WD MyBooks are a bit dodgy though; there are so many horror stories. They encrypt all data, so if your RAID config, which is supposedly stored on the hard disk, gets corrupted, then you are screwed. Also, when you tell it to sleep when inactive, the laptop freezes for 5-10s whenever you open a file dialog while the drive wakes up. There is also a bug in the utility application that says "can't access drive", which makes you think your drive is broken, but it's actually just a software bug; not running the utility software fixes it. I also think it can cause some unable-to-sleep issues for macOS. Sometimes it is super loud, too.
I use it to make daily incremental backups of my NAS, but you could replace the NAS by a USB drive.
If only they would actually sync files outside the app data dir like they do on desktop.
In the vast majority of cases spontaneous data corruption happens in _transit_ due to RAM glitches.
All modern drives implement forward error correction on a per-sector basis. This allows the drive to automatically repair up to 10% of damage to any given sector, in which case correct data is returned to the requestor and the sector is either tagged for relocation or relocated right away. In cases where the data can’t be cleanly recovered, the read request is failed.
That is, the chance of a read request returning mangled data from a disk is next to absolute zero. Meaning, in turn, that if you do see data corruption, it happened before the data hit the disk, i.e. in transit.
While memory does fail sometimes, if it was failing at the rate you describe PCs would not be suitable to any work at all.
I have worked in ops for many years. A lot of software that copies files is perfectly happy to leave you with a copy of a different length than the original.
The context of the OP's post and my reply is the case where copying in bulk with a mature tool yields corruption in a small fraction of the data. In this case the cause is in-transit corruption (rather than at-rest corruption, which is a fairly common belief dubbed the "bitrot" phenomenon).
https://github.com/mleonhard/deduposaur
I tend to end up with multiple copies of photos and other docs: SD card, laptop, backup drives, and USB thumb drives. Deduposaur helps me to move files into my latest backup drive. It flags duplicates, modified files, and files that I previously deleted. It also checks each file's SHA-256 digest to detect corruption.
The program is usable but needs some usability improvements and more tests. The license is Apache 2.0. I am happy to receive problem reports, feature requests, and pull requests.
Not that this kind of money is relevant when it's your personal files that are at stake, just a small observation plus a recommendation for whoever reads this and didn't know about it:
For Lightroom, I keep the library on my local SSD drive and the photos on the NAS. One thing to note is that you can import photos into a Lightroom library in place (from wherever they currently reside), or you can have Lightroom copy or move them to a location of your choice, with a few options around how to organize the files. In this case, the Lightroom software itself could well be the cause of the corruption, as it is doing the copying. I copy the files to a destination on the NAS using a mapped drive letter, but it works with UNC paths as well. On Windows, I have never experienced any corruption in about 8 years of using the Synology.
The only sane way to deal with unique/irreplaceable data (i.e. one’s own photos) where the authoritative copy is on a mac is to make several independent copies, with several kept offline, and assume that the macOS or Adobe’s terrible house of cards that is the creative suite is going to fuck everything up at some point.
I say this as someone with >1000GiB in a Lightroom catalog, run on a mac.
I have a copy on an always-connected external disk (made with rsync && rsync -c), a copy on a ZFS NAS (made with rsync && rsync -c user@host:), a Time Machine backup that is only connected weekly, and an offsite rsync to a USB drive, with a SHASUMS file sidecar.
I also copy the RAWs into the LR masters photo structure when I import them, and they go first from the camera into a Syncthing folder (pre-import) that is immediately and automatically replicated onto four machines in three buildings (two of which are ZFS with autosnapshots).
ReFS has an option for integrity checking, which (unfortunately) might be off by default.
I was using a WD myCloud that I retired when we moved house a couple of years ago as a secondary backup but that’s not a runner anymore. I’ve 32TB RAID6 volume on my personal office server and I’ll probably use that as a separate offsite backup.
We do use Time Machine with a Time Capsule that I’ve recently replaced the disk in. I don’t really trust it and also pay for iCloud Drive for most of my other data. Work also provides Google Drive which I use for data that is shared with others.
It’s all a bit of a mess but I think I’ve got some protection in place.
I had hardware-related flakiness with a Windows system at the time. I swapped it out for an iMac but sometime later I discovered that there was some residual corruption of my Lightroom catalog even though I had restored a backup that seemed to be OK. Fortunately I was able to use some SQLite tools to largely fix the catalog.
https://bitmason.blogspot.com/2013/01/recovering-corrupt-lig...
Also from what I recall, Apple SMB is based on a forked version of Samba from 10 years ago. They did this when Samba went GPLv3. Not clear how well they’re keeping up with quality improvements.
My solution: use WebDAV on your Synology for transferring files from a Mac or Linux host. It has its own quirks but is far more reliable. Unfortunately this won’t work for Time Machine. But it would work with “rsync -aXv --partial”.
Just store the raw files directly in a regular file tree structure and avoid any kind of catalog/index/db file.
I do find ZFS very interesting but in practical use it has been trouble working with, getting weird errors when trying to remove a vdev.
It has all the features of Lightroom with some extras.
Moreover Capture One is missing some features which Lightroom has: The ability to filter collections based on their name. A shortcut for 'increase/decrease rating'. Filter photos for 'x stars and higher'. Show all images in a folder, including those in subalbums/subfolders. Being able to mark photos with a 'Reject' flag/rating (and easily hiding, not removing, those).
I'm still fairly new to photography and use a few preset packs that I've paid for as a base before making modifications to them to my liking. I downloaded a program that converts Lightroom presets to LUTs, and those worked decently, but they were visibly inferior due to the fact that the converter cannot handle camera/lens profiles.
I ended up just going back to Lightroom Classic for now, but hopefully once I progress enough to make my own edits from scratch I'll be able to switch back to Capture One.
Immutable files should be the easiest case to handle reliably.
exiv2 -e p4 image_name.NEF
Apparently this JPEG is what the camera shows on its back screen when you're previewing the image/RAW file.
I wonder if the OP also has a Fusion drive - merely reading off a corrupt SSD cache or bad RAM would explain some of the damage.
https://fstoppers.com/post-production/most-important-setting...
I lost about a year of family photos thanks to this sort of issue once so I am very wary of problems.
I do photography as a hobby so I never saw a need for backing up photos and I never paid for the subscription (which would include cloud storage) because I didn’t use any of the tools that came along with the subscription.
Lightroom catalogs also don't contain any images, I think a lot of people are confused and think it does.
I've had a DS412+ since, well, 2012. I've dutifully replaced drives as they went bad, run disk scrubs, file system scrubs, and (in the last nine years) had to occasionally run time machine restores from it. It worked like a champ each time, including "restoring" from an old computer to a brand new one. However, in the last couple of years, we couldn't get my husband's machine to back up or restore successfully, and now out of our three macs, only one is able to successfully back up at this point, and (looking at this) I'm skeptical that it's actually working.
That's not the conclusion the author came to - why is it the one you came to?
I use a Synology with a Mac nearly every day. I don't use it for Time Machine, but that's just because I use different backup strategies, like storing the files directly on the Synology, or copying them over after local editing depending on the type of file. Also, like the author learned, I check my backups periodically to make sure the file is the same on the other side.
> I'm skeptical that it's actually working
You should test it then. If you've never tested you backups, you have none.
I have, and so far we're one out of three.
Mac A seems to restore (at least the last time we tried it); Mac B failed to restore; Mac C fails to back up.
I don’t think this is a Synology problem. No Windows users are reporting this issue. On the other hand, NAS devices have always been treated as second class citizens by Apple.
If reliable network attached storage is important to you, maybe it’s time to move on from MacOS?
As much as I hate Windows, using my NAS with it is a dream. It’s how the experience of network storage was supposed to feel.
This is mostly not about Lightroom, or photos, or any of those specifics. This is about (a) a failure in a migration followed by (b) *failure of the user to verify the migration before deleting the original copies*:
>To make matters worse, I deleted the photos from the USB hard drive, without verifying the new catalog.
That line is, not coincidentally, where I stopped reading.
You don't get all the cool features, but process is pretty safe and you don't worry about hackers because it is offline most of the time.
I’ve used Arq to backup about 2TB of external drives to AWS Glacier. It even let me set the encryption key for the data myself!
1. Lightroom runs on Windows 10 system
2. I add new images onto it from one set of cards. The cards are pulled out and sit on a desk. A previous set of cards is put back in the camera and wiped. At this point there are images in Lightroom and images on the cards.
3. Every night, the Windows box rsyncs the Lightroom catalog and raw data onto a Linux box running RAID-1
4. Every night, after the rsync is completed Linux box triggers a backup of the rsync'ed catalog and files into S3 compatible remote bucket.
This means that I have all images and catalog stored at least twice locally and once remotely. I have also validated that I can recover from a failure of Windows disk, Linux server and remote backup.