Storage systems have become their own unique and complex field of computing, and the term can mean different things to different people. So how can we define these systems? Put simply, storage systems are the hardware that stores data.
For example, in a small business server supporting an office of ten users or fewer, the storage system is the set of hard drives inside that server where user information is kept. In large business environments, the storage system can be a large SAN cabinet full of hard drives whose space has been sliced and diced in different ways to provide redundancy and performance.
The ever-changing storage system technology
Today's storage technology encompasses all sorts of storage media. These could include Write Once Read Many (WORM) systems, tape library systems and virtual tape library systems. Over the past few years, SAN and NAS systems have provided excellent reliability. What is the difference between the two?
- SAN (Storage Area Network) units can be massive cabinets - some with 240 hard drives in them! These large 50+ terabyte storage systems do more than just power up hundreds of drives: they are incredibly powerful data warehouses with versatile software utilities behind them that manage multiple arrays, support various storage architecture configurations and provide constant system monitoring
- NAS (Network Attached Storage) units are self-contained units that have their own operating system and file system and manage their attached hard drives. These units come in all sorts of sizes to fit most needs and operate as file servers
For some time, large-scale storage was out of reach for small businesses. SAN systems based on Serial ATA (SATA) hard disk drives have become a cost-effective way of providing large amounts of storage space. These array units also offer virtual tape backup systems - literally RAID arrays presented to the host as tape machines, thereby removing the tape media element completely.
Other storage technologies such as iSCSI, DAS (Direct Attached Storage), Near-Line Storage (data held on removable media kept close at hand) and CAS (Content Addressed Storage) are all methods of providing data availability. Storage architects know that just having a 'backup' is not enough.
Speedy obsolescence
In today's high-volume information environments, a normal nightly incremental or weekly full backup is obsolete within hours - or even minutes - of creation.
In large data warehouse environments, backing up data that changes constantly is not even a practical option. The only method for those massive systems is to maintain storage system mirrors - literally identical servers with exactly the same storage space.
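As a simplified illustration of the mirroring principle (not any vendor's actual implementation), the Python sketch below applies every write to both a primary and a mirror copy; the file names are hypothetical stand-ins for whole storage systems, which in practice replicate at block or array level in firmware.

```python
# Minimal illustration of synchronous mirroring: every write is applied
# to both the primary and the mirror copy, so the two stay identical.
# Plain files stand in for whole storage systems here.

class MirroredStore:
    def __init__(self, primary_path, mirror_path):
        self.paths = [primary_path, mirror_path]

    def write(self, offset, data):
        # Apply the same write to every copy so the mirrors stay in step.
        for path in self.paths:
            with open(path, "r+b") as f:
                f.seek(offset)
                f.write(data)
                f.flush()

if __name__ == "__main__":
    # Create two identically sized "volumes" and mirror a write across them.
    for p in ("primary.img", "mirror.img"):
        with open(p, "wb") as f:
            f.truncate(1024)
    store = MirroredStore("primary.img", "mirror.img")
    store.write(0, b"critical record")
```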
3 things to consider when choosing a system
Careful analysis of the operating environment is required. Most would say that having no failures at all is the best environment - that is true for users and administrators alike! The harsh truth is that data disasters happen every day despite the implementation of risk mitigation policies and plans.
When reviewing your storage needs, consider:
- What is the recovery turn-time? What is your client's maximum allowable time to get back to the data? In other words, how long can you or your client survive without the data? This will help to establish performance requirements for equipment
- Quality of data restored: Is original restored data required or will older, backed-up data suffice? This relates to the backup scheme that is used. If the data on your storage system changes rapidly, then the original data is what is most valuable
- How much data are you or your client archiving? Restoring large amounts of data takes time to move through a network. On DAS (Direct Attached Storage) configurations, the time to restore will depend on the equipment and the I/O performance of the hardware; a rough restore-time calculation is sketched after this list
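To make the turn-time question concrete, here is a rough, hypothetical calculation of how long a restore would take for a given data volume and network link, compared against a recovery window. All of the figures are placeholders, not benchmarks.

```python
# Rough restore-time estimate: how long will it take to move the archive
# back across the network, and does that fit the recovery window?
# All figures below are hypothetical placeholders.

def restore_hours(data_gb, throughput_mb_per_s, efficiency=0.7):
    """Estimate restore time in hours, allowing for protocol overhead."""
    effective = throughput_mb_per_s * efficiency          # usable MB/s
    seconds = (data_gb * 1024) / effective                # GB -> MB
    return seconds / 3600

if __name__ == "__main__":
    data_gb = 2000            # 2 TB archive to restore
    link_mb_per_s = 125       # roughly a 1 Gbit/s network link
    rto_hours = 8             # maximum tolerable downtime

    hours = restore_hours(data_gb, link_mb_per_s)
    print(f"Estimated restore time: {hours:.1f} h (recovery window is {rto_hours} h)")
    if hours > rto_hours:
        print("This restore path cannot meet the recovery objective.")
```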
Unique data protection schemes
Storage system manufacturers are pursuing unique ways of processing large amounts of data while still being able to provide redundancy in case of disaster.
Some large SAN units incorporate intricate device block-level organisation, essentially creating a low-level file system from the RAID perspective. Other SAN units keep an internal block-level transaction log, so that the SAN's control processor tracks all of the block-level writes to the individual disks. Using this transaction log, the SAN unit can recover from unexpected power failures or shutdowns.
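As a very simplified, hypothetical sketch of that transaction-log idea (real controllers implement it in firmware with far more sophistication), the Python below journals each block write before applying it, then replays the journal after an unexpected shutdown.

```python
# Simplified block-level transaction log: record each intended block write
# in a journal before applying it, then replay the journal after a crash.
# A dictionary stands in for the disk blocks.

class JournaledBlockDevice:
    def __init__(self, block_count):
        self.blocks = {i: b"\x00" * 512 for i in range(block_count)}
        self.journal = []          # pending (block_no, data) entries

    def write_block(self, block_no, data):
        entry = (block_no, data)
        # 1. Log the intended write first (the transaction-log step).
        self.journal.append(entry)
        # 2. Apply the write to the underlying blocks.
        self.blocks[block_no] = data
        # 3. Mark the transaction complete by removing its journal entry.
        self.journal.remove(entry)

    def recover(self):
        # After an unexpected shutdown, re-apply any writes that were
        # logged but may not have reached the disk.
        for block_no, data in self.journal:
            self.blocks[block_no] = data
        self.journal.clear()

if __name__ == "__main__":
    dev = JournaledBlockDevice(block_count=8)
    dev.write_block(3, b"A" * 512)
    # Simulate a crash mid-write: the entry is journalled but never applied.
    dev.journal.append((5, b"B" * 512))
    dev.recover()                      # replay brings block 5 up to date
    print(dev.blocks[5][:1])           # b'B'
```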
How could recoverability be improved?
Some computer scientists specialising in the storage field propose adding more intelligence to the RAID array controller card so that it is 'file system aware'. This technology would provide greater recoverability if disaster struck, the goal being a storage array that is more self-healing.
Other ideas along these lines are to have a heterogeneous storage pool where multiple computers can access information without being dependent on a specific system's file system. In organisations where there are multiple hardware and system platforms, a transparent file system will provide access to data regardless of what system wrote the data.
Other computer scientists are approaching the redundancy of the storage array quite differently. The RAID concept is in use on a vast number of systems, yet computer scientists and engineers are looking for new ways to provide better data protection in case of failure. The goals that drive this type of RAID development are data protection and redundancy without sacrificing performance.
You may not have terabytes or petabytes of information, yet during a data disaster, every file is critically important.
Avoiding storage system failures
Though you may not be able to prevent a disaster from happening, you may be able to minimise the disruption of service to your clients.
There are many ways to reduce or eliminate the impact of storage system failures. For example, you can add redundancy to primary storage systems. Some of the options can be quite costly, and only large business organisations can afford the investment; these include duplicate storage systems or identical servers, known as 'mirror sites'. Additionally, elaborate backup processes or file-system 'snapshots' that always provide a checkpoint to restore to offer another level of data protection.
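To illustrate the snapshot-with-checkpoint idea, here is a minimal copy-on-write sketch, assuming a toy in-memory volume: taking a snapshot records a checkpoint, any block changed afterwards has its old contents preserved first, and the volume can always roll back to that checkpoint. Production snapshot features in file systems and SAN firmware are far more involved.

```python
# Copy-on-write snapshot sketch: before a block is overwritten, its old
# contents are saved in the active snapshot, so the volume can always be
# rolled back to that checkpoint.

class SnapshotVolume:
    def __init__(self, block_count):
        self.blocks = {i: b"\x00" * 512 for i in range(block_count)}
        self.snapshot = None           # maps block_no -> original data

    def take_snapshot(self):
        self.snapshot = {}             # start a new checkpoint

    def write_block(self, block_no, data):
        if self.snapshot is not None and block_no not in self.snapshot:
            # Preserve the pre-change contents exactly once per block.
            self.snapshot[block_no] = self.blocks[block_no]
        self.blocks[block_no] = data

    def rollback(self):
        # Restore every block changed since the checkpoint was taken.
        for block_no, original in self.snapshot.items():
            self.blocks[block_no] = original
        self.snapshot = {}

if __name__ == "__main__":
    vol = SnapshotVolume(block_count=4)
    vol.write_block(0, b"good data".ljust(512, b"\x00"))
    vol.take_snapshot()                        # checkpoint
    vol.write_block(0, b"corrupted".ljust(512, b"\x00"))
    vol.rollback()                             # back to the checkpoint
    print(vol.blocks[0][:9])                   # b'good data'
```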
Experience has shown there are usually multiple or rolling failures that happen when an organisation has a data disaster. Therefore, to rely on just one restoration protocol is short-sighted. A successful storage organisation will have multiple layers of restoration pathways.
We have heard thousands of IT horror stories of initial storage failures turning into complete data calamities. In an effort to bring back a system, some choices can permanently corrupt the data.
4 ways to minimise loss after a disaster
There are several risk mitigation policies that storage administrators can adopt that will help minimise data loss when a disaster happens:
- Offline storage system: Avoid forcing an array or drive back on-line. There is usually a valid reason for a controller card to disable a drive or array; forcing an array back on-line may expose the volume to file system corruption
- Rebuilding a failed drive: When rebuilding a single failed drive, it is important to allow the controller card to finish the process. If a second drive fails or goes off-line during the rebuild, stop and get professional data recovery services involved. During a rebuild, replacing a second failed drive will change the data on the other drives
- Storage system architecture: Plan the storage system's configuration carefully. We have seen many cases where multiple configurations were used on a single storage array - for example, three RAID 5 arrays (each holding six drives) striped together in a RAID 0 configuration and then spanned. Keep the storage configuration simple and document every aspect of it; a capacity sketch for this kind of nested layout follows this list
- During an outage: If the problem escalates to the Original Equipment Manufacturer's (OEM) technical support, always ask “Is the data integrity at risk?” or “Will this damage my data in any way?” If the technician says that there may be a risk to the data, stop and get professional data recovery services involved
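To put some numbers on the nested-configuration example above, the sketch below works out the raw and usable capacity of three six-drive RAID 5 groups striped together (a RAID 50-style layout); the 1 TB drive size is a hypothetical figure.

```python
# Capacity arithmetic for the nested layout mentioned above: three RAID 5
# groups of six drives each, striped together (RAID 50 style).
# Drive size is a hypothetical 1 TB per disk.

def raid50_usable_tb(groups, drives_per_group, drive_tb):
    # Each RAID 5 group gives up one drive's worth of capacity to parity.
    usable_per_group = (drives_per_group - 1) * drive_tb
    return groups * usable_per_group

if __name__ == "__main__":
    groups, drives_per_group, drive_tb = 3, 6, 1.0
    total_raw = groups * drives_per_group * drive_tb
    usable = raid50_usable_tb(groups, drives_per_group, drive_tb)
    print(f"Raw capacity:    {total_raw:.0f} TB across {groups * drives_per_group} drives")
    print(f"Usable capacity: {usable:.0f} TB (one drive per group lost to parity)")
    # Fault tolerance: the layout survives one failed drive per RAID 5
    # group, but a second failure inside the same group loses the volume.
```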
More information:
Server Data Recovery