If you care about power consumption like I do, you can Google "$model energy consumption white paper", which turns up very accurate data about idle usage, for example https://sp.ts.fujitsu.com/dmsp/Publications/public/wp-energy...
In one case I had a NUC where, on Linux, after enabling power-saving features for the SATA controller, idle usage even fell to 5 W when the PDF claimed 9.
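For reference, the SATA power-saving knob on Linux is usually link_power_management_policy; a udev rule along these lines (the filename is arbitrary, and the policy name is the standard kernel one, so check your kernel supports it) applies it for every SATA host at boot:

```
# /etc/udev/rules.d/50-sata-pm.rules (hypothetical filename)
# Enable aggressive SATA link power management on every SCSI/SATA host
ACTION=="add", SUBSYSTEM=="scsi_host", KERNEL=="host*", \
  ATTR{link_power_management_policy}="med_power_with_dipm"
```

`powertop` can show whether the setting took effect and how much it saves at idle.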
Having an actual PC instead of a random SBC ensures the best connectivity, expandability, and software support forever. With almost all SBCs you're stuck with a random-ass kernel from when the damn thing was released, and you basically have to frankenstein together your own up-to-date distro with the old kernel, because the vendor certainly doesn't care about updating the random Armbian fork they created for that thing.
I cannot say no to cheap compute for some reason.
I sympathise - for me I think it comes from growing up with early generation PCs that were expensive, hard to get, and not very performant; now you can buy something that's a supercomputer by comparison for almost nothing. Who can resist that?! I'll think of something to do with them all eventually...
The only downside is that it doesn't have space for multiple 2.5"/3.5" disks, but that is just personal preference anyway.
The Dells coming off lease now have modern features, including Intel 8th-gen CPUs with TPM, USB-C, etc...
they are my favorite 'home server' currently...cheap, standard, and expandable - oh! And SILENT! :-)
I bought them for 50 euro apiece without RAM and disks.
Examples:
- PINE64 Rock64 running FreeBSD 14.1 replaced an aging RPi3. I use this for SDR applications. It's a small, low-power device with PoE that I can deploy close to my outdoor antennas (e.g. 1090 MHz for dump1090-fa ADS-B). It's been really solid with its eMMC, and FreeBSD has good USB support for RTL-SDR devices.
- PINE64 RockPro64 running NetBSD 10. I have a PCIe card with an M.2 slot holding a 500 GB SSD. NetBSD has ZFS support and it has been stable. This lets me take snapshots on the SSD zpool. I generate time-lapse videos using the faster cores.
You don't get 100% HW support (e.g. no camera support for RockPro64) but I don't need it. The compromise is worth it in my case because I get a stable and consistent system that I'm familiar with: BSD.
However, there are many NUC-like small computers made by various Chinese companies, with AMD Ryzen CPUs and 2 SODIMM sockets, in which you can use 64 GB of DRAM.
Those using older Ryzen models, like the 5000 series, may be found at prices between $220 and $350. Those using newer Ryzen models, up to Zen 4 based 7000 or 8000 series, are more expensive, i.e. between $400 and $600.
While there are dozens of very cheap computer models made by Chinese firms, the similar models made by ASUS and the like are significantly more expensive. Since ASUS bought the Intel NUC line, they have raised its prices a lot.
Even so, if a non-Chinese computer is desired and 64 GB is the only requirement, then Intel NUCs from older generations like NUC 13 or NUC 12, with Core i3 CPUs, can still be found at prices between $350 and $400 (the traditional prices of barebone Intel NUCs were $350 for Core i3, $500 for Core i5 and $650 for Core i7, but they have been raised a lot for the latest models).
EDIT: Looking now at Newegg, I see various variants of ASUS NUC 14 Pro with the Intel Core 3 100U CPU, which are under $400 (barebone).
It should be noted that this CPU is a Raptor Lake Refresh and not a Meteor Lake CPU, like in the more expensive models of NUC 14 Pro.
This is a small computer designed to work reliably for many years on a 24/7 schedule, so if reliability is important, especially when using it as a server, I would choose this. It supports up to 96 GB of DRAM (with two 48 GB SODIMMs). If performance per dollar matters more, then there are cheaper and faster computers with Ryzen CPUs, made by Chinese companies like Minisforum or Beelink.
https://michael.stapelberg.ch/posts/2024-07-02-ryzen-7-mini-...
Perhaps it can be configured to meet your requirement for "affordable"?
> Next Unit of Computing (NUC) is a line of small-form-factor barebone computer kits designed by Intel.
Ref: https://en.wikipedia.org/wiki/Next_Unit_of_Computing
They look even smaller than one-litre PCs.
1. Newer DAS devices connect using USB-C, but USB type-A/e-SATA ones can be found.
Edit: figuring out how to run TrueNAS as a guest OS was a nightmare; the first 5+ pages of results will be about TrueNAS as a host.
The real benefit is the small form factor and the "low" power consumption. Paying 43 bucks for the whole thing, I'm now asking myself if it is worth saving a few bucks and living with 100 Mbit network speed, instead of spending 150 bucks and getting 2.5 Gig.
There are so many (also "used") alternatives out there:
- Fujitsu Futro S920 (used < 75, ~10W)
- FriendlyElec NanoPI R6C (< 150, ~2W, https://www.friendlyelec.com/index.php?route=product/product...)
- FriendlyElec Nas Kit (< 150, ~5W, https://www.friendlyelec.com/index.php?route=product/product...)
- Dell T20 / T30 (used < 100, ~25W)
- Fujitsu Celsius W570 (used < 100, ~15W)
My personal NAS / Homeserver:
Fujitsu D3417-B
Intel Xeon 1225v5
64GB ECC RAM
WD SN850x 2TB NVMe
Pico PSU 120
More expensive, but reliable, powerful, and drawing <10 W idle.

Since it has neither ECC nor support for common open-source NAS operating systems, I still would not buy it as my daily driver. I just don't think that a difference of 5 W idle power is worth the effort of milling out stuff, using USB storage, and the additional maintenance effort of keeping this system up to date.
I don't do cheap any more. But I can see the appeal.
Need extra drives? Buy extra drives. Need an extra NAS for backups? Buy an extra NAS. Need an offsite copy? Buy space, plus an offsite NAS and drives for the offsite copy.
Price point of the unit doesn’t change anything here.
Just buy it and be done with it. It's certainly more expensive than DIYing it with off-the-shelf components and things bought from online classifieds. But for most people, who have no interest in tinkering or don't know what to do, just paying the price of a complete solution might be worth it.
Business documents, accounting records, family photos - sure you probably want to keep them safe.
But if my NAS is just 20TB of pirated movies and TV shows (not that I'm saying it is...) then I'm much more comfortable buying the cheapest drives I can find and seeing how it goes.
On the other hand, terabytes and terabytes of pirated content represent a lot of work but aren't necessarily worth paying to back up over the internet. I can redownload it all if I need to, but I'd rather not do that just because some crap drive or NAS I saved 20 bucks on died and now I need to spend a week rebuilding my entire collection. It doesn't need to be Fort Knox, but I'll spring for a proper NAS, drives, and pool redundancy for this content.
"sure, grandpa, drives make noise [rolls eyes]"
Very cheap, and it has served me for more than a decade now. Highly recommended. I've dealt with data loss through drive failures, user error, and unintentional software bugs/errors. No problem.
And how are you accessing it when away from home? A VPN that you're permanently connected to? Is there a good way to do NAT hole-punching?
Syncthing kind of does what I want, in that it lets all my computers sync the same files no matter what network they're on, but it insists on always copying all the files ("syncing") whereas I just want them stored on the NAS but accessible everywhere.
Nextcloud kind of does what I want but when I tried it before it struck me as flaky and unreliable, and seemed to do a load of stuff I don't want or need.
- On mine I use NFS and SMB which covers most possible clients.
- I use an ssh bastion that I expose via Tailscale to connect to mine remotely. So a VPN but it's wireguard based so it's not too intrusive. I have a gig up, though, YMMV.
- My NAS has 28TB of space. I'm still working on backup strategy. So far it just has my Dropbox and some ephemera I don't care about losing on it.
- Regarding other services: I use Dropbox pretty extensively but these days 2TB just isn't very much. Plus it gets cranky because I have more than 500,000 files in it.
This is my personal setup but I think it's a bit different for everyone.

Most mid-range routers allow SSH and have a decent CPU.
A colleague uses a QNAP instead, which he claims has a better price/storage ratio at the expense of worse software usability. I'm okay paying a bit more of my own money (at home), as well as taxpayers' money (at work), for better usability, because it will likely pay off by saving time in the long run; I currently don't have a dedicated sysadmin in my team.
The only question mark to date: when installing with non-Synology (enterprise SSD) drives I got a warning that mine were not "vendor sourced" devices, and I decided not to take any risk and replaced all drives with "original" Synology ones, just because I can. This may be just disinformation from Synology to make their own customers nervous (it reminds me of the "only HP toner in HP laser printers" discussion), but it would have been a distraction to investigate further, and my time is more valuable than simply replacing all drives.
On the LAN, I just use SMB. It is adequate for my needs.
For remotely accessing my collection of Linux ISOs, I use Plex.
Syncthing for a small collection of files I want available from all my machines - commonly used documents, photos, stuff I want quickly backed up or synced automatically.
Samba for my long term mostly-read rarely-write storage with larger files, ISOs, etc.
My NAS is a Synology. A VPN is also used so that I can continue sending Time Machine backups back home when I'm traveling.
Tailscale/Wireguard has been such a big leap forward.
https://github.com/libfuse/sshfs
For added security I limit my home ssh access to a handful of trusted IPs including my cloud VM. Then I set up an ssh tunnel from my hotel through the cloud VM to home. The cloud VM never sees my password / key
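A tunnel like that is easy to sketch with OpenSSH's ProxyJump, which forwards the encrypted stream through the bastion without the bastion ever seeing keys or passwords (hostnames below are made up):

```
# ~/.ssh/config (hypothetical hosts)
Host home
    HostName home.example.net
    User me
    ProxyJump me@cloud-vm.example.net   # hop through the cloud VM

# then just: ssh home, or sshfs home:/tank /mnt/tank
```

Because the home server only allows the cloud VM's IP, a scanner that finds the home box can't even start an SSH handshake.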
> However, at present SSHFS does not have any active, regular contributors, and there are a number of known issues (see the bugtracker).
Not that it is unusable or anything, it is still in widespread use, but I'd guess many assume it to be part of openssh and maintained with it, when it isn't.
An interesting alternative might be https://rclone.org/, which can speak SFTP and can mount all (of the many) protocols it speaks.
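As an illustration, an SFTP remote in rclone is just a config stanza (remote name, host, and key path below are placeholders):

```
# ~/.config/rclone/rclone.conf fragment
[nas]
type = sftp
host = nas.example.net
user = me
key_file = ~/.ssh/id_ed25519
```

After that, `rclone mount nas:/tank /mnt/nas --daemon` gives a FUSE mount much like sshfs, and the same remote works with `rclone sync`, `rclone copy`, and the rest of its subcommands.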
I usually just use zerotier for this, it's extremely lightweight
How come ZeroTier is 10X smaller?
My router has public IP so I didn't have any problems reaching it from the outside, so any VPN could work. Another approach is to rent some cheap VPS and use it as a bastion VPN server, connecting both home network and roadwarrior laptop.
No idea about any "integrated" solutions, I prefer simple solutions, so I just used ordinary RHEL with ordinary apache, etc.
I also use a SonicWall VPN to connect to my house and be on the network, so that covers most of it. I also use Synology QuickConnect if I need to use the browser without the VPN, which covers the most urgent needs. It hasn't failed me in over a decade, and my NAS also syncs with the Synology C2 cloud, which is another piece of peace of mind. I know it might sound a little unsafe having files stored in the cloud, but it is what it is.
I won't play with half-baked, library-dependent homebrew solutions which cost way more time and cause more headaches than commercial solutions. I won't open ports and forget about them later, either.
Syncthing over Tailscale is running smoothly too, it doesn't matter where my machines move, they find each other using the same internal address every time.
I built the service to keep the dns entry updated myself, so I'm sure it's not as secure as it could be, but it only accepts pings via https and it only works if the body of the POST contains a guid that is mapped to the dns entry I want it to update.
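That kind of updater boils down to a periodic authenticated ping from the home network; as a sketch (URL and GUID are obviously placeholders), the client side can be a single crontab entry:

```
# crontab fragment: POST the secret GUID every 5 minutes so the service
# can map the caller's source IP to the DNS entry
*/5 * * * * curl -fsS -X POST -d '{"guid":"00000000-0000-0000-0000-000000000000"}' https://dns-updater.example.net/ping
```

The server then reads the caller's IP from the request and updates the record only when the GUID matches.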
I barely use my tailnet now, might have more of a case for it later, but they are near the top of my "wishing you success but please don't get acquired by a company that will ruin it" list.
RAID support, NFS/SFTP/Samba support, a nice Web UI to set up access and configure sharing, and even the ability to enable sharing outside your own NAT.
- I use a lot of different folders within syncthing, and different machines have different combinations to save space where they aren't needed; the NAS has all of them.
- on the LAN, sshfs is a resilient-but-slower alternative to NFS. If I reboot my NAS, sshfs doesn't care & reconnects without complaint...last time I tried to use it, NFS locked up the entire client.
- zerotier + sshfs is workable-but-slow in remote scenarios
Note I'm mostly trying to write code remotely. If you're trying to watch videos....uh, good luck.
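The reboot-tolerance comes from sshfs's own mount options; a sketch of a persistent mount (host and paths invented):

```
# /etc/fstab fragment, or equivalently on the command line:
#   sshfs -o reconnect,ServerAliveInterval=15 me@nas:/tank /mnt/nas
me@nas:/tank  /mnt/nas  fuse.sshfs  reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,_netdev,noauto,x-systemd.automount  0  0
```

`reconnect` plus the keepalive options is what lets the client ride out a server reboot instead of hanging like a stale NFS mount.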
This is the point where I'd have thrown it in the trash and given up. I simply don't know how people have the patience to debug past stuff like this: I get that the point of the project is to be cheap and simple, but this is expensive in time and decidedly not simple.
"The distribution builder is a proprietary commercial offering as it involves a lot of customer IP and integrations so it cannot be public."
Seems like a supply side injector to me!
If you get a Pi 4 or Pi 5, or one of the Rockchip boards with RK3566 or RK3588 (the latter is much more pricey, but can get gigabit-plus speeds), you can either attach a USB hard drive or SSD, or with most of them now you could add on an M.2 drive or an adapter for SATA hard drives/SSDs, and even do RAID over 1 Gbps or sometimes 2.5 Gbps with no issue.
Some people choose to run OpenMediaVault (which is fine), though I have my NASes set up using Ansible + ZFS running on bare Debian, as it's simpler for me to manage that way: https://github.com/geerlingguy/arm-nas
I would go with Radxa or maybe Libre Computer if you're not going the Raspberry Pi route, they both have images for their latest boards that are decent, though I almost always have issues with HDMI output, so be prepared to set things up over SSH or serial console.
I picked up an HP ultradesk something-or-other for dirt cheap a while back. When I got it, it turned out to be surplus stock, so not even second hand: it was brand new, for maybe 20% of the retail price. Dead quiet, and super power efficient. It's not the most powerful CPU, but it's 10th or 11th generation, which is perfect for hardware encoding for my media-server use case.
It does not have all the hardware for RAID and multiple hard drives and all that, but one NVME boot disk, and one 16TB spinning rust disk is more than enough for my needs. It's media, so I'm not worried about losing any of it.
These boxes are cheap enough that you can get multiple ones responsible for multiple different things in a single "deployment". At one point I had a box for NAS, a box for media server, a box for my CCTV IP cameras and a box running homeassistant. All humming along nicely. Thankfully I was never masochistic enough to try some kubernetes thing orchestrating all the machines together.
This is all obviously for the homelab/personal use case. Would not recommend this for anything more serious. But these machines just work, and they are bog standard X86 PCs, which removes a lot of the hardware config and incompatibility bullshit associated with more niche platforms.
- https://vermaden.wordpress.com/2023/04/10/silent-fanless-del...
I use that and the costs are about $20-25 for a used Dell Wyse 3030 and $60 for a used 5 TB 2.5" Seagate HDD in a USB 3.0 case.
Then the power bills will also be tiny, as it draws about 3.8 W when idle and 10.3 W with CPU and disks stressed to the maximum.
The only limitation is 'only' 2 GB RAM, but with the ZFS ARC set to 32 MB minimum and 64 MB maximum, RAM is not an issue.
% grep arc /etc/sysctl.conf
vfs.zfs.arc.min=33554432
vfs.zfs.arc.max=67108864
Regards.

# smartctl -a /dev/da0
smartctl 7.4 2023-08-01 r5530 [FreeBSD 13.2-RELEASE-p1 amd64] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Elements / My Passport (USB, AF)
Device Model: WDC WD50NDZW-11A8JS1
Serial Number: WD-WXP2EA07UYLA
LU WWN Device Id: 5 0014ee 2be7aef41
Firmware Version: 01.01A01
User Capacity: 5,000,947,523,584 bytes [5.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 2.5 inches
TRIM Command: Available, deterministic
Device is: In smartctl database 7.3/5528
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Sat May 11 10:26:04 2024 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 4680) seconds.
Offline data collection
capabilities: (0x1b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
No Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 560) minutes.
SCT capabilities: (0x30b5) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0027 253 253 021 Pre-fail Always - 3508
4 Start_Stop_Count 0x0032 096 096 000 Old_age Always - 4626
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 083 083 000 Old_age Always - 12789
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 139
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 58
193 Load_Cycle_Count 0x0032 192 192 000 Old_age Always - 24665
194 Temperature_Celsius 0x0022 086 083 000 Old_age Always - 66
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 100 253 000 Old_age Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
Selective Self-tests/Logging not supported
The above only provides legacy SMART information - try 'smartctl -x' for more.

So: the price of a used Sandy Bridge or newer laptop (optionally with a cracked screen) gets you 1 Gbit Ethernet, USB 3, a couple of SATA ports, a couple of PCIe lanes (ExpressCard and mPCIe slots), and a built-in UPS.
OP has a perverse sense of humor :)
----
But, not to waste space on this mindless joke, here's my (or, more precisely, my wife's) success story.
So, I've had it with laptops and built myself a PC. Money wasn't really a problem, I just wanted to make sure I would have enough of everything, and spares if necessary. So I've got a be quiet! case with eight caddies and a place for a SATA SSD. It's expensive... but it doubles as my main workstation, so I don't have any regrets about spending more on it! It has a ton of room for installing fans. It has like ten of them at this point, plus liquid cooling. The wifi card that was built into the mobo I bought doesn't have a good Linux driver... but the case has a ton of space, so I could stick in a PCIe wifi card. And I still have plenty of room left.
Anyways. My wife was given some space for her research by the institute she works for. They get this space through some obscure company with vanishing IT support, where, in the end, all the company does is put a fancy HTML front-end on Azure cloud services, sometimes mismanaging the underlying infrastructure. While the usage was uncomfortable but palatable, she continued using it. Then the bill came, and oh dear! And then she needed to use a piece of software that really, absolutely, unquestionably needs to be able to create symlinks. And the obscure company with vanishing IT had put together their research environment in such a way that the NAS is connected via SMB, and... no symlinks.
So... I bought a bunch of 4T hard-drives, and now she has all the space she needs for her research. She doesn't even pay for electricity :(
The fitting SBC was about the same price; the most expensive parts were the high-efficiency (GaN) wall-wart and the 2.5" disk.
I know this, because I ordered this eons ago :-)
Still running somewhere, that thing. 24/7 since then, with some reboots, because updates...
Runs Armbian, if you like to, or anything else if you are willing to mess more.
Seems to be still on sale, according to https://www.friendlyelec.com/index.php?route=product/product...
Although, I can tell you what not to do: a 45 drive SAS/2 or /3 4U JBOD case takes way too much power to use all the time and uses screaming 1U fans and PSUs by default.
I do have 45 drives in various XFS on md raid10 arrays. Don't even mention ZoL ZFS because that was a fucking disaster of undocumented, under-tested, and SoL "support" for something that should only be used on a proper Sun box like a Thumper. XFS and md are fairly bulletproof.
Perhaps one can Ceph their way into significant storage with a crap ton of tiny DIY boxes, but it's going to be a pain to deploy and manage, take lots of space, and probably damn expensive to physically install, network, and provide UPS for.
But if you're open to paying that much, well, I was considering that specific board along with some others, then I found an entire 12th gen Intel mini-PC for only 50% more and immediately changed my mind.
He's wrong here. The most important thing with small files is latency, and a 1000M network will have significantly less latency than a 100M network.
Anyone running TimeMachine over network knows what I mean - local attached storage is blazing fast (particularly SSDs), wired network storage is noticeably worse performing, and wifi is dog f...ing slow.
I'm currently at 46 TB of storage, and I recently threw in a 2.5 Gbps NIC when I upgraded the rest of my home network.
(Mine certainly uses more electricity than the one in the article, but I pay $0.07/kWh, and run a few Docker images that take advantage of the added performance, so I'm happy with it.)
Regarding that LaFrite board: a while ago I mailed LoverPi, which appears to be the only one selling it, to ask whether they accept PayPal, but got no reply. Does anyone know of a distributor in the EU, or a different worldwide seller?
For storage I've been using Synology for a long time, first ds411+slim and now a ds620slim. I love the slim form factor, only 2.5" drives. It just works™
NAS works with phones, tablets and laptops with egregiously expensive, non-expandable storage.
On iOS/iPadOS, use SSH/SFTP to work around the business-model-challenged "Files" client.
I SyncThing my wife’s laptop to it. I serve a bunch of videos off of it to our AppleTV. All our photos are there.
I have a Time Machine backup of the main system drive (my wife uses TM as well). The whole thing is backed up to BackBlaze, which works great, but I have not had to recover anything yet.
I would like to run ZFS, mostly to detect bit rot, but each time I've tried it, it pretty much immediately crushed my machine to the point of unresponsiveness, so I've given up on that. And that was doing trivial stuff: having ZFS manage a single volume. Just terrible.
So now it’s just an ad hoc collection of mismatched USB drives with cables not well organized. My next TODO is to replace the two spinning drives with SSD. Not so much for raw performance, but simply that when those drives spin up, it hangs up the computer. So I’d like SSD for “instant” startup.
Not enough to just jettison the drives, but if opportunity knocks, I’ll swap them out.
Right now I have Plex running on a raspberry pi hooked up to an 8tb external HDD. Works fine, but I want to scale up to the 100-200TB range of storage, and it feels like the market is pushing me towards spending an inordinate amount of money on this. Don't understand why it's so expensive.
I just don't understand the reluctance of people to put storage in their actual computer(s).
(Not great for many tiny files or file contents changing a lot.)
That said, I do see a lot of value in low-power systems like the author's, and I run a couple. The way I do the energy calculation, though, is that I boot them off internal storage (MMC/SD) and then mount a root filesystem from the NAS. That way they don't have any storage power cost directly, they are easy to replace, and the power consumed by my NAS is amortized over a number of systems, giving it some less obvious economics.
[1] It is an iXSystems FreeNAS based system.
You can turn it into a NAS at any time by adding a mini pc or similar.
One of the nice things is that it has a full sync of my cloud storage, so I don't have to think about backing up individual devices much any more: I create a file on my laptop, it syncs to cloud storage, then to the mini PC. From that point on it's part of the regular nightly/monthly backup routine.
If I hit the 4TB limit it might be a pain, as I'm not sure it'll support an 8TB SSD.
Not to be relied on by itself, but it absolutely qualifies as a backup.
It's OK for corporate systems, but complete overkill for personal setups.
My personal files are ultimately a lot more important to me and much more irreplaceable than any files at work.
I'd never run a NAS without ZFS and ECC.
I use ECC EVERYWHERE now. My laptop, my desktop, my little home server. All ECC. Because ECC is cheap and provides a lot of protection for very little effort on my part.
There are various trade-offs you can make depending on your filesystem, OS tooling, and hardware which can mitigate risks in different ways. However, non-ECC invites a lot of risk. How often are you checksumming your backups to validate integrity? It seems unlikely you've had zero memory corruption over 20 years; more likely you didn't notice it, or you run a filesystem with tooling that handles it.
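Checksumming a backup tree to catch silent corruption is cheap to script with plain coreutils; a minimal sketch (the throwaway directory stands in for a real backup tree):

```shell
# demo on a throwaway dir; point it at your real backup tree instead
backup=$(mktemp -d)
echo important > "$backup/doc.txt"

# record a checksum for every file; keep the manifest OUTSIDE the tree
( cd "$backup" && find . -type f -exec sha256sum {} + ) > /tmp/manifest.sha256

# later, re-hash and compare; a non-zero exit means silent corruption
( cd "$backup" && sha256sum --check --quiet /tmp/manifest.sha256 ) && echo "backup intact"
```

Run the check on a schedule (or before restoring) and any flipped bit shows up as a named file rather than a mystery years later.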
Can you (or someone) suggest a backup scheme? I have a 28TB NAS. Almost everything I've looked into is expensive or intended more for enterprise tier.
Are there options for backup in the "hobbyist" price range?
My home NAS has several roles, so I don't have to backup the full capacity. The family shared drive definitely needs to be backed up. The windows desktop backups probably don't, although if I had a better plan for offsite backups, I would probably include desktop backups in that. TV recordings and ripped optical discs don't need to be backed up for me, I could re-rip them and they're typically commercially available if I had a total loss; not worth the expense to host a copy of that offsite, too; IMHO.
You might do something like mirrored disks on the NAS and single copy on the backup as a cost saver, but that comes with risks too.
Buy the hardware to make a lightweight backup server. Make backups work with it. Take it to your friend's place along with a bottle of scotch, plug it in, and then: Use it.
Disaster recovery is easy: Just drive over there.
Redundancy is easy: Build two, and leave them in different places.
None of this needs to happen on rented hardware in The Clown.
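The nightly job on a box like that can be a one-line rsync pull; a sketch with local stand-in directories (in practice the source would be the NAS over SSH, e.g. nas.example.net:/tank/important/):

```shell
# demo with two local dirs; the source would normally be a remote NAS path
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/file"

# -a preserves perms/times, --delete makes dst an exact mirror of src
rsync -a --delete "$src/" "$dst/"
diff -r "$src" "$dst" && echo "mirror up to date"
```

Since --delete propagates deletions too, pair it with filesystem snapshots on the backup box if you also want history, not just a mirror.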
Back in the days of DVDs, I used to back up my 20 GB drive onto DVDs. I wonder if you could do something similar today, but instead of a bunch of 4 GB optical discs, you would use 4 x 8 TB drives?
You could also take the awkward route and add one or two large drives to your desktop, mirror there, and back that up to backblaze (not B2).
The other suggestions you got for hot storage strike me as the wrong way to handle this, if you're considering $80 per year per TB for backups then just make another NAS.
So this might be on the higher end of the price range if you're using up all 28 TB uncompressed since that's about $168 per month though...
$0.004 per GB/month
When running software RAID, memory errors could also cause data to be replicated erroneously and raise an error the next time it's read. That said if the memory is flaky enough that these errors are common it's highly likely that the operating system will crash very frequently and the user will know something is seriously wrong.
If you want to make sure that files have been copied correctly you can flush all kernel buffers and run diff -r between the source and destination directory to make sure that everything is the same.
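Concretely (the temp directories stand in for a source directory and a NAS mount):

```shell
# demo: copy a tree, flush buffers, then verify the copy bit-for-bit
src=$(mktemp -d); dst=$(mktemp -d)   # stand-ins for a source dir and a NAS mount
mkdir "$src/sub"
echo photo > "$src/sub/a.jpg"

cp -a "$src/." "$dst/"               # copy everything, preserving metadata
sync                                 # flush dirty pages so the data is on disk
diff -r "$src" "$dst" && echo "copies match"
```

For a stricter check that re-reads from the physical disk rather than the page cache, you can additionally drop caches (as root) before the diff.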
You're probably way more likely to experience data loss due to human error, or external factors such as a power surge, than due to bad RAM. I personally test the memory thoroughly before a computer gets put into service and assume it's okay until something fails or it gets replaced. The only machine I've ever seen that would corrupt random data on a disk was heavily and carelessly overclocked (teenage me cared about getting moar fps in games, not having a reliable workstation lol).
https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-y...