I always assumed that having a stable 40°C is better than a drive constantly swinging between 20°C and 40°C, so I am surprised that the article only mentions alerts on reaching a high threshold.
Maybe dividing the drives roughly into "higher than average variability" and "low variability" groups and then looking at the AFR for each subset could show some relation. Of course, as the AFR for many drives is already quite low, the effect might be too small to distinguish from noise.
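A rough sketch of that split, using made-up per-drive temperature readings; the field names and data are illustrative assumptions, not Backblaze's actual schema:

```python
from statistics import pstdev, median

# Hypothetical records: per-drive SMART temperature readings plus
# whether the drive eventually failed (invented for illustration).
drives = [
    {"serial": "A1", "temps": [38, 39, 38, 40, 39], "failed": False},
    {"serial": "B2", "temps": [22, 35, 41, 25, 38], "failed": True},
    {"serial": "C3", "temps": [30, 31, 30, 30, 31], "failed": False},
    {"serial": "D4", "temps": [20, 40, 21, 39, 22], "failed": False},
]

# Temperature variability per drive, split on the median spread.
spreads = {d["serial"]: pstdev(d["temps"]) for d in drives}
cutoff = median(spreads.values())

high_var = [d for d in drives if spreads[d["serial"]] > cutoff]
low_var = [d for d in drives if spreads[d["serial"]] <= cutoff]

def failure_rate(group):
    # Crude fraction-failed stand-in for AFR; real AFR would
    # normalize by drive-days in each group.
    return sum(d["failed"] for d in group) / len(group)
```

With real fleet data you'd normalize by drive-days rather than drive count, but the grouping idea is the same.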
On the topic of temperatures: Have you run an analysis of whether a drive increasing in temperature (or maybe even decreasing) compared to its baseline and "neighbors" results in a higher chance of failure?
I'd be interested if you have data to compare similar drives with stable temperatures against diurnal temperature cycling.
I'd imagine you have fairly constant data-centre-type environments, though, which would confound analysis of such questions.
My simple rule is that all drives suck, and always have good backups.
Well, my main reason was that WD decided that just failing "naturally" after a few years wasn't enough, but that a drive having been on for 3 years should be considered the same as "failing" (communicated through WDDA), which led to Synology adopting that for a while. Not sure what the current state is of that, but I intend to swap drives when they fail, not when they turn 3.
The ones that never failed me were the drives made by Quantum with a SCSI interface. 5 drives, zero failures over 10+ years. But those were slower, cooler-running units.
It's like the student who copies the 'A' student's exam. They'll purposefully get a couple of the answers wrong to avoid suspicion.
Toshiba got the 3.5" division and WD got the 2.5" division in an antitrust divestiture (just 3 companies left): https://upload.wikimedia.org/wikipedia/commons/8/87/Diagram_...
Combine that with their lackluster reputation in solid state as of late and I probably won't buy their HDDs until Seagate one day gives me the "WTF are you even selling?" rigmarole too.
So then I start scanning the drive. The partition table was deleted, but it was full of data, most of it encrypted. It was also badly fragmented, so it was likely in some type of array. What I could recover in cleartext implied that it had been spun up for thousands of hours, in a Backblaze datacenter. It wasn't conclusive enough to go to Backblaze about it, so I just returned it to Amazon. I probably did a zero pass on it first, can't remember.
Encrypted or no, if that was a backblaze drive, they were disposing of drives with customer data still on them. I'm not surprised someone tried to pass it off as new on Amazon, that scam is old hat. I was very shocked to see the data still intact though.
Who cares if there was "customer data still on them" if it was encrypted? One of the nice things about encryption is that you don't have to worry about wiping drives.
Would you be comfortable publicly posting the encrypted database from your password manager? Or encrypted copies of your financial information? Go on, drop a google drive link if you're so confident
> In Q3, six different drive models managed to have zero drive failures during the quarter. But only the 6TB Seagate, noted above, had over 50,000 drive days, our minimum standard for ensuring we have enough data to make the AFR plausible.
Didn't realize they can last so long.
After that it went into my server at home. It was used for various things. At this point it’s spent the last 5 years as the disk my DVR records to. (Because 5 years ago I was expecting it to die any day and didn’t want anything more important on it…) So it’s being continuously written and rewritten 24 hours a day, 7 days a week.
It’s now about 17 years old and has spent almost the entirety of that time powered on. It’s been packed up and sent on two cross country moves. Still kicking.
I have a number of WD Green 1.5TB drives that are nearly as old and still in daily use in the same server.
Maybe I’ve just had great luck, but I’d be more surprised by a drive dying sooner than eight years.
I have some ~140GB SAS drives HP branded that are probably of similar age based on the capacity, and they still work… but again, they haven’t exactly been active for the last 15 years straight.
I used to like Samsung hard drives personally; Lacie used them in their rugged series and I found them to have pretty low failure rates. Seagate bought out Samsung's disk drive business in 2011, apparently. I guess Samsung saw the future was SSDs?
Though Backblaze is pretty good at what it does and you can set your own encryption key, if that's the concern.
[1] https://help.backblaze.com/hc/en-us/articles/360038171794-Wh...
Oddly, SpiderOak has been a background go-to for years and always worked smoothly for backing up everything I select in a wholesale way, keeping a fairly clear record of what was saved, removed or moved, and adjusting immediately to any file changes or deletes I do. The SO interface is shitty and often freezes, and lacks many basic features like being able to see file sizes or scrolling through long file lists easily, but at least overall, I can quickly and easily see when backups are happening, how they're being done and what's being saved. Also, for restoring files, it's surprisingly fast despite a reputation held by many that it's slow.
Hard disagree. The jankiness of their software was what made me cancel my sub after several years. Firstly, it doesn't follow the OS date and number formatting. It's a minor thing, but it's so annoying having to parse the dumb M/D/Y format, comma thousands separator, etc. (being da-DK). It's not a deal breaker, but on the other hand, it's such a low-hanging thing that most other software gets right immediately.
But far more importantly, the BB client would sometimes just decide to re-upload several hundred gigabytes of data that I know for sure didn't change, which makes me wonder whether it's the client misbehaving or the data got lost server-side. And it takes absolutely forever to detect USB hard drives being plugged in. And its log files will grow to absurd sizes, and you're not allowed to purge them or the client breaks. And one time I needed to do a restore, it took literal days for BB to prepare it and I had to get support involved. I feel I just can't trust the BB stack, the client being the weakest link by far, and backup I can't trust is worthless.
How do you calculate the low and high AFR?
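AFR itself is just failures per drive-year; the low/high columns in tables like Backblaze's are a confidence interval around that rate. A minimal sketch using the normal approximation to a Poisson failure count (their exact method may differ, and for very small failure counts an exact Poisson interval would be more appropriate):

```python
import math

def afr_interval(failures: int, drive_days: int, z: float = 1.96):
    """Annualized failure rate (percent) with a rough ~95% interval.

    Treats the failure count as Poisson-distributed, so its standard
    deviation is sqrt(failures); z = 1.96 gives ~95% coverage.
    """
    drive_years = drive_days / 365.0
    afr = failures / drive_years * 100.0                 # % per drive-year
    half_width = z * math.sqrt(failures) / drive_years * 100.0
    return max(afr - half_width, 0.0), afr, afr + half_width

# e.g. 30 failures over 1,000,000 drive-days (~2,740 drive-years)
low, mid, high = afr_interval(30, 1_000_000)
```

This also shows why a drive-day minimum matters: with few drive-days the interval is so wide that a quarter of zero failures says very little.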
L = without power disable, 0 = with
What? Isn't your data center supposed to be temperature controlled, with the A/C keeping the entire environment within a degree of its setpoint?
Being able to tell what season it is based on your HDD SMART temperature (armchair expert here) sounds bad.