This small document shows what computer science looked like to me when I was just getting started: a way to make computers more efficient and smarter, to solve real problems. I wish more people who claim to be "computer scientists" or "engineers" would actually work on real problems like this (efficient file sync) instead of spending their time learning the new React API or patching the f'd-up NextJS CVE that's affecting a multitude of services.
If the same level of vulnerability was as prevalent today as it was back then, civilization might collapse overnight.
Admittedly SSH wasn't around, but Kerberos+rlogin and SSL+telnet were available. Organizations that cared about security would issue SecurID tokens to their employees and require them for login.
Dial-in over phone lines with a required password was much less discoverable and exploitable than services exposed to the internet today.
That mail server used maildir which, for those who are not familiar, stores each email message as a separate file on disk. Thus there were a lot of folders containing many thousands of files, plus hardlinks for the daily/weekly/whatever versions of each of those files.
At the time there were those who were very vocal about their opinion of using maildir in this kind of capacity, likening it to abuse of the filesystem. And if that was stupid, then my use of hard links certainly multiplied that stupidity.
Perhaps I was simply not very smart at that time.
But it was actually fun to fit that together, and it was kind of amazing to watch rsync perform this job both automatically and without complaint between a pair of particularly not-fast (256kbps?) DOCSIS connections from Roadrunner.
It worked fine. Whenever I needed to go back in time for some reason, the information was reliably present at the other end with adequate granularity -- with just a couple of cron jobs, rsync, and maybe a little bit of bash script to automate it all.
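For anyone wanting to reproduce that kind of setup, here's a minimal sketch of the hardlinked-snapshot idea (not the original scripts; paths, host, and schedule are placeholders). The --link-dest option is what makes unchanged files show up as hardlinks against the previous snapshot:

    #!/bin/sh
    # Hypothetical daily snapshot job, run from cron.
    SRC=/var/mail/                          # local maildir root (placeholder)
    DEST=backup@remote:/srv/mailbackup      # destination (placeholder)
    TODAY=$(date +%F)

    # Unchanged messages become hardlinks against the previous snapshot,
    # so each day's directory only costs the space of new/changed files.
    rsync -aH --delete --link-dest=../latest "$SRC" "$DEST/$TODAY/"

    # Repoint "latest" at the snapshot we just made, on the remote side.
    ssh backup@remote "ln -sfn $TODAY /srv/mailbackup/latest"

Rotation is then just deleting old dated directories; because everything is hardlinked, removing a snapshot never touches data still referenced by another one.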
If you ever need to do something like this again, it's often faster to parallelize rsync. One tool that provides this is fpsync.
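For what it's worth, the same idea can be approximated with plain xargs when the tree fans out at the top level (paths, host, and job count below are made up; fpsync partitions by file count/size rather than by directory):

    # One rsync per top-level directory, up to 8 at a time.
    # Note: files sitting directly under /src are not covered by this.
    cd /src && ls -d */ | xargs -P 8 -I{} rsync -a {} user@remote:/dst/{}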
How he managed to avoid lawsuits from Microsoft is beyond me.
[1] Server Message Block:
MS probably chose not to shut down that effort on the basis that it was enabling the MS stack in Linux.
I wish I could dig up an internal presentation prepared for Bill Gates in the 90s that evaluated the threat Linux posed to Microsoft. I think they were probably happy that Linux now had a reason to talk to Windows machines.
Then they realized interoperability could make them more money, and they invited him and his team to Redmond for a week of working with MS engineers to understand the latest protocol versions. Oh wait, no, it was because the EU forced them. https://www.theregister.com/2007/12/21/samba_microsoft_agree...
Similar with header files: issues arise when they are "misused" to derive not a compatible solution but a competing one.
One alternative I'd like to try is Google's abandoned CDC[1], which claims to be up to 30x faster than rsync in certain scenarios. Does anyone know if there is a maintained fork with full Linux support?
I always alias rsync to:
    alias rsync='/usr/bin/rsync --archive --xattrs --acls --hard-links --progress --rsh="ssh -p PORT -l USER"'
I almost never use any other program for file transfers between computers.
Fun surprise: rsync uses file size and modified time first to decide whether files are identical. I build these ISOs with Nix, which sets the timestamp to Jan 1st 1970 for reproducible builds, and I suspect the ISOs are padded out to the next sector. So rsync was not noticing the new ISO images when I made small changes to config files until I added the --checksum flag.
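For reference, a sketch of the difference (host and paths are invented); the first form relies on the size+mtime quick check, the second forces a content comparison:

    # Quick check only: a rebuilt ISO with its mtime pinned to 1970 and an
    # unchanged (padded) size looks "identical" and gets skipped.
    rsync -a result.iso user@host:/srv/isos/

    # Checksum every file that already exists on the receiver.
    # Slower, but catches the case above.
    rsync -a --checksum result.iso user@host:/srv/isos/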
I think it'd be a good idea for rsync to not trust timestamp 0.
When pulling data onto macOS from a Linux computer I would experience hangs even when setting the protocol version to 29 or 28. What fixed it for me was just switching to the samba rsync program on macOS.
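Roughly, the two knobs to try look like this; the Homebrew prefix shown is the Apple Silicon default, and host/paths are placeholders:

    # Install the upstream rsync (rsync.samba.org) via Homebrew and call
    # it explicitly instead of the rsync that ships with macOS.
    brew install rsync
    /opt/homebrew/bin/rsync -a user@linuxhost:/data/ ~/data/

    # Or, with the stock client, pin an older protocol version
    # (the thing that did not help in the case above).
    rsync -a --protocol=29 user@linuxhost:/data/ ~/data/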
https://news.ycombinator.com/item?id=43605846
Most OpenBSD people I know install the real version from ports.
I use it to back up a few virtual machines that, in the event of a site loss, would be difficult to rebuild but also critical to getting our developers back to work. I take an LVM snapshot of the VM, then use bdsync to replicate it to our backup server, and from there I replicate it off to Backblaze, then destroy the snapshot.
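The rough shape of that pipeline, heavily simplified (volume and host names are placeholders, the Backblaze leg is omitted, and the bdsync invocation is paraphrased from its README, so check the flags against the real docs before trusting it):

    #!/bin/sh
    VG=vg0
    LV=vm-disk
    SNAP=${LV}-backup

    # 1. Freeze a point-in-time view of the VM's disk.
    lvcreate --snapshot --size 10G --name "$SNAP" "/dev/$VG/$LV"

    # 2. Ship only the changed blocks to an image file on the backup
    #    server (local bdsync drives a remote "bdsync --server", then the
    #    generated patch is applied on the remote side).
    bdsync "ssh backup@bkhost bdsync --server" \
        "/dev/$VG/$SNAP" "/backups/$LV.img" > "/tmp/$LV.bdsync"
    scp "/tmp/$LV.bdsync" backup@bkhost:/tmp/
    ssh backup@bkhost "bdsync --patch=/backups/$LV.img < /tmp/$LV.bdsync"

    # 3. Drop the snapshot so it can't fill up and invalidate itself.
    lvremove -f "/dev/$VG/$SNAP"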
If, however, you just want a copy of a block device on another system, like for a weekly backup (our case), it's probably overkill -- especially as keeping it truly consistent requires running in the mode where writes are acknowledged only once both the remote and the local device have them.
My VMs are running on Ganeti, which has a mode where the backing device can be DRBD and written to another host. That works great if you have the extra disk space and can deal with the latency, and it also allows you to live-migrate VMs between the two hosts.
In my case I ultimately want the copy off-site, so DRBD isn't really a great fit.
DRBD is very good stuff though, I've used it for decades for HA database servers and the like.
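For anyone who hasn't touched DRBD, the day-to-day surface is small. A rough sketch, assuming a resource named r0 defined in /etc/drbd.d/ on two nodes:

    # Initialize metadata and bring the resource up (run on both nodes),
    # then promote one side. "protocol C" in the resource config is the
    # mode described above: writes are acked only after both the local
    # and the remote disk have them.
    drbdadm create-md r0
    drbdadm up r0
    drbdadm primary --force r0   # only on the node that should be primary
    drbdadm status r0            # older versions: cat /proc/drbd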
It never crossed my mind that Linux at some point had only 2441 files, and that you could actually read through the code that went into a new version. That time has sailed.