Five years of data shows that SSDs are more reliable than HDDs in the long run.

Backup and cloud storage company Backblaze has published data comparing the long-term reliability of SSDs and traditional spinning hard drives in its data center. Based on data gathered since the company began using SSDs as boot drives in late 2018, Backblaze cloud storage evangelist Andy Klein released a report yesterday showing that the company's SSDs are failing at a much lower rate than its hard drives as the drives age.

Backblaze has been publishing drive failure statistics (and related commentary) for years; its hard drive reports reflect the behavior of tens of thousands of data-storage and boot drives from most major manufacturers. The reports are comprehensive enough to support at least some conclusions about which companies make the most (and least) reliable drives.

The sample size for this SSD data is much smaller, in both the number and the variety of drives tested: mostly 2.5-inch drives from Crucial, Seagate, and Dell, with little Western Digital/SanDisk representation and no Samsung drives at all. That makes the data less useful for comparing relative reliability between companies, but it can still be useful for comparing the overall reliability of HDDs to that of SSDs doing the same job.

Backblaze uses SSDs as boot drives for its servers rather than for data storage, and its data compares those SSDs to hard drives that were also used as boot drives. The company says these drives store logs, temporary files, SMART statistics, and other data in addition to handling booting; they aren't writing terabytes of data every day, but they don't just sit idle once the server has booted, either.

During the first four years of life, SSDs fail less often than HDDs overall, but the curves look basically the same: a few failures in the first year, a bump in the second, a slight decline in the third, and another rise in year four. But once the drives reach their fifth year, the hard drive failure rate starts to climb rapidly, from 1.83% in the fourth year to 3.55% in the fifth. Backblaze's SSDs, on the other hand, kept failing at roughly the same rate as the year before.
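For reference, percentages like these come from an annualized failure rate (AFR) calculation based on failures per drive-day of service, the approach Backblaze describes in its drive-stats reports. A minimal sketch of that calculation, using hypothetical counts rather than figures from the report:

```python
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """AFR as a percentage: failures per drive-day, scaled to a full year."""
    return failures / drive_days * 365 * 100

# Hypothetical example: 10 failures across 1,000 drives that each ran
# for a full year (365,000 drive-days) works out to a 1.0% AFR.
print(annualized_failure_rate(failures=10, drive_days=1000 * 365))
```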

This data makes intuitive sense, both in the size of the reliability gap and in the fact that HDDs begin failing in volume earlier than SSDs do. All other things being equal, you'd expect a drive with lots of moving parts to have more points of failure than a drive with none. But it's still interesting to see the case made with data from thousands of drives over several years of use.

Klein suggests that SSDs "can hit a wall" and begin to fail at a faster rate as their NAND flash chips wear out. If that were the case, you would expect lower-capacity drives to start failing faster than higher-capacity drives, since a drive with more NAND has a higher write tolerance. You would also likely see many of these drives start to fail around the same time, since they are all doing the same job. And home users who constantly create, edit, and move large multi-gigabyte files may find that their drives wear out faster than they do in Backblaze's use case.
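To make the write-tolerance point concrete: SSD endurance ratings (TBW, terabytes written) generally scale with capacity, so under an identical workload a smaller drive reaches its rated endurance first. A quick sketch with hypothetical ratings and a hypothetical boot-drive write volume, none of which come from the report:

```python
# Hypothetical TBW ratings and workload, for illustration only.
ratings_tbw = {"250 GB": 150, "1 TB": 600}
daily_writes_tb = 0.02  # ~20 GB/day, a plausible light boot-drive workload

for capacity, tbw in ratings_tbw.items():
    years_to_wall = tbw / daily_writes_tb / 365
    print(f"{capacity} drive: ~{years_to_wall:.0f} years to rated endurance")
```

Under a workload this light, even the smaller drive's rated endurance is decades away, though it does get there first; a heavier write workload shrinks both numbers proportionally.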

For anyone who would like to see the raw data that Backblaze uses to create its reports, the company makes it available for download here.
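As a starting point for exploring that data, here is a minimal sketch assuming the daily-snapshot CSV layout Backblaze has published for its drive stats (one row per drive per day, with a failure column that flips to 1 on the day a drive fails); the filename is hypothetical:

```python
import pandas as pd

# Load one daily snapshot; the filename is hypothetical, and the column
# names assume the schema Backblaze's drive-stats downloads have used.
df = pd.read_csv("2022-06-30.csv", usecols=["date", "model", "failure"])

# "failure" is 1 only on the day a given drive fails, so summing it
# per model yields that day's failure count by model.
print(df.groupby("model")["failure"].sum().sort_values(ascending=False))
```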
