Until a couple of years ago, many IT administrators and managers were concerned about the risk of losing valuable data because of a sudden failure. That’s why it took manufacturers a long time to convince the public that SSDs are safe to use, even when handling sensitive data.
A NAND flash-based SSD is a completely different storage medium from a traditional hard disk drive, which stores its data on magnetic platters. An SSD consists of an electronic controller and several storage chips. A hybrid drive (also called an SSHD) combines both technologies: a conventional magnetic hard disk as well as flash storage chips.
What are the benefits of SSDs?
The main benefit of storing data in electronic chips is speed: SSDs are much faster than HDDs because a conventional HDD relies on mechanical parts and rotating platters, and repositioning the read/write head takes far longer than pushing data through electronic interfaces. SSDs also have very short access times, which makes them ideal for environments where real-time access and transfer are a necessity.
What are the disadvantages of SSDs?
The downside of NAND flash-based SSDs is that their lifespan is limited by design. While normal HDDs can, in theory, last forever (in reality, about 10 years at most), an SSD has a built-in “time of death.” Put simply: each storage cell inside the chips can only be written to a limited number of times, roughly between 3,000 and 100,000 times, depending on the type of flash. After that, the cells “forget” new data. For this reason, and to prevent some cells from being used constantly while others sit idle, the controller uses wear-levelling algorithms to distribute writes evenly across all cells. As with HDDs, users can check the current status of an SSD with a S.M.A.R.T. analysis tool, which shows the remaining lifespan of the drive.
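The idea behind wear levelling can be sketched in a few lines. This is a deliberately simplified illustration, not any vendor's actual algorithm: the controller directs each write to whichever physical block currently has the lowest erase count, so no single cell wears out long before the others.

```python
class WearLevellingController:
    """Toy model of dynamic wear levelling (illustrative only)."""

    def __init__(self, num_blocks, max_erases=3000):
        # Per-block erase counters; a real controller keeps these in metadata.
        self.erase_counts = [0] * num_blocks
        self.max_erases = max_erases

    def write_block(self, data):
        # Direct the write to the least-worn physical block.
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        if self.erase_counts[block] >= self.max_erases:
            raise IOError("all cells have reached their write limit")
        self.erase_counts[block] += 1
        return block  # physical block that received the data

ctrl = WearLevellingController(num_blocks=8)
for _ in range(80):           # 80 writes spread across 8 blocks
    ctrl.write_block(b"data")
print(ctrl.erase_counts)      # every block erased exactly 10 times
```

Because every write lands on the least-worn block, the erase counts stay balanced, which is exactly why a single TBW figure for the whole drive is meaningful.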
Estimating terabytes written (TBW)
Manufacturers usually give a lifespan estimate in so-called terabytes written (TBW), especially for enterprise SSDs, but also for consumer models. Because wear levelling distributes data evenly across all cells, this figure indicates how much data can really be written in total to all cells in the storage chips over the drive's whole lifespan.
A typical TBW figure for a 250 GB SSD lies between 60 and 150 terabytes written. That means: to exceed a guaranteed TBW of 70 within a single year, a user would have to write about 190 GB every day (in other words, fill roughly three-quarters of the SSD with new data daily). In a consumer environment, this is highly unlikely.
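The arithmetic behind those figures is easy to check. A quick back-of-the-envelope calculation, assuming decimal units (1 TB = 1,000 GB):

```python
# How much must be written per day to exhaust a 70 TBW guarantee in one year?
tbw_gb = 70 * 1000               # 70 terabytes written, in GB
daily_gb = tbw_gb / 365          # required daily write volume
print(round(daily_gb))           # ~192 GB per day
print(round(daily_gb / 250, 2))  # ~0.77: three-quarters of a 250 GB drive daily
```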
Samsung example
Samsung states that its SSD 850 PRO SATA, with a capacity of 128 GB, 256 GB, 512 GB or 1 TB, is “built to handle 150 terabytes written (TBW), which equates to a 40 GB daily read/write workload over a ten-year period.” Samsung even promises that the product is capable of “withstanding up to 600 terabytes written (TBW).” A typical office user writes roughly between 10 and 35 GB on a normal day. Even at 40 GB per day, it would take almost 5 years of nothing but writing to reach the 70 TBW limit mentioned above.
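These vendor figures can be sanity-checked the same way (again assuming decimal units, 1 TB = 1,000 GB):

```python
def years_to_reach(tbw, gb_per_day):
    """Years of constant writing needed to reach a given TBW rating."""
    return tbw * 1000 / gb_per_day / 365

# Samsung's 150 TBW rating at a 40 GB daily workload:
print(round(years_to_reach(150, 40), 1))  # ~10.3 years, matching the ten-year claim
# The 70 TBW guarantee from the earlier example at the same workload:
print(round(years_to_reach(70, 40), 1))   # ~4.8 years, i.e. almost five years
```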
SSD lifespan even longer than promised
The most recent estimates put the age limit for SSDs at around 10 years, though the average SSD lifespan is shorter. A joint study by Google and the University of Toronto tested SSDs in production over a multi-year period. It found that the age of an SSD, rather than the amount of data written, was the primary determinant of when it stopped working. The study also found that SSDs needed to be replaced about 25% less often than HDDs.
Remember: In the case of data loss from an SSD, the best course of action is to contact a professional data recovery service provider. With a physical fault, there is no way for users to recover or rescue their data themselves. And when the controller or a storage chip is malfunctioning, attempting recovery with a specialised data recovery software tool is even more dangerous: it can lead to permanent data loss with no chance of ever recovering the data again.
If they last that long, where are the dangers?
Even though the average SSD lifespan is longer than originally expected, this storage medium still poses a serious challenge: recovering data from failed SSDs remains harder for data recovery service providers than recovering it from HDDs, because gaining access to the device is often difficult. When the SSD's controller chip is broken, neither the device nor its storage chips can be accessed. The solution is to find a working controller chip identical to the faulty one, then remove the faulty chip and swap in its replacement to regain access. What sounds quite simple is a difficult task in reality, and the same applies to accessing data from faulty storage chips. In many cases, data recovery experts like those at Ontrack are able to rescue the data: over the last few years, Ontrack has developed many special tools and processes to master these challenges and has successfully recovered lost data.
For enterprise IT asset managers, as well as third-party recyclers and the ITADs that support them, it is important to understand SSD technology, why SSD erasure is challenging, and the importance of choosing an effective erasure product with detailed reporting capabilities.
Our latest report investigates these challenges and the solutions you should implement to get around them.
Get your FREE report here.