Just because I’m a technical professional, who’s been accused a few times of speaking in very technical terms, doesn’t mean I don’t enjoy dabbling as a storyteller whenever I can. So, in telling the story of what makes security in the data center particularly challenging, and thus the security-related need for using a data center ITAD specialist, I set the table by establishing these points in my previous two blogs:
- When it comes to protecting data on data center IT equipment that is being decommissioned, any risk mitigation strategy short of eliminating risk is deficient. Especially when you consider that strategies which merely mitigate data security risk actually cost more and are less socially and environmentally responsible.
- To eliminate data breach risk during IT asset disposition, you must first have 100% accountability for every serialized hard drive and data-bearing component. Then, you must erase data with verification to the sector level. And, you must do both onsite, before IT equipment ever leaves the data center facility.
I’ll now conclude the security storyline with the challenges of onsite data security in the data center.
Onsite Data Security Challenges of Data Center ITAD
As I stated in an earlier blog, erasing data onsite in the data center according to best practices is the only way to eliminate data security risk and ensure 100% auditability at decommission. However, that doesn’t mean it’s easy. After all, we’re talking about massive amounts of data, stored on a complex array of technology, that must be erased not only effectively, but efficiently given the productivity impact of data center downtime. The following are the biggest challenges that operators run into when erasing data onsite in the data center:
- Traditional data sanitization tools fail often when wiping data center IT equipment. Most data sanitization tools weren’t engineered to handle the high-volume, high-capacity needs of the data center. Therefore, with conventional data sanitization tools, the data wiping process may fail as often as it succeeds, and can be so time- and labor-intensive that it simply becomes prohibitive despite the optimal security outcome.
Additionally, from what our customers have told us, other data erasure solutions they have used in their data centers have issues with:
- Scaling to the number of IT assets that can be processed concurrently.
- Locating IT assets.
- Requiring external network connections that are not possible in high security data center environments.
- Requiring software deployments through USB keys that become difficult to manage.
- Requiring advanced training and many onsite technicians to complete the data sanitization jobs.
That’s why we developed Teraware as the first enterprise-grade data erasure platform built to scale without limit while delivering the ITAD industry’s best erasure success rates. In fact, for most large-scale onsite data center decommissioning jobs that I’ve managed, we consistently achieve ~99% data erasure success.
- The non-uniformity of data wiping tools. Contributing to the above problem is the fact that, while ITAD vendors will have erasure tools that can erase data center equipment, there is no industry standard those tools must conform to. A tool may be DOS-, Linux-, or Windows-based, may only be capable of processing a single device at a time, and may not offer a manufacturing-grade interface or provide any verification step. I emphasize the word can because not all data erasure is created equal. Some data sanitization tools are better than others and, like any tool that supports a mission-critical function, there must be a process that conforms with best practices. Ideally, there should be a common onsite data security procedure that uses a common tool regardless of device type. Best practices include:
- Erasing data onsite as equipment comes out of production
- Reconciling every serialized drive against asset management records and erasing over network
- Erasing hard drives inside data center racks
- Verifying to the sector level and generating a serialized Certificate of Sanitization
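To make the reconciliation step above concrete, here is a minimal sketch in Python, using entirely hypothetical serial numbers, of matching the serialized drives discovered onsite against the asset management records for a job. Any discrepancy in either direction must be resolved before equipment leaves the facility:

```python
# Hypothetical illustration: reconcile drive serial numbers discovered
# onsite against the asset-management record for a decommission job.
def reconcile(asset_records, discovered_serials):
    """Return drives unaccounted for in either direction."""
    expected = set(asset_records)
    found = set(discovered_serials)
    return {
        "missing": sorted(expected - found),     # on record, not found onsite
        "unexpected": sorted(found - expected),  # found onsite, not on record
    }

# Example run: one drive on record but not found, one drive found
# onsite that was never recorded (all serials are made up).
result = reconcile(
    asset_records=["WX41A001", "WX41A002", "WX41A003"],
    discovered_serials=["WX41A001", "WX41A003", "ZA99X777"],
)
print(result)  # {'missing': ['WX41A002'], 'unexpected': ['ZA99X777']}
```

Only when both lists come back empty does the job have the 100% serialized accountability described above.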
To put the data volume into perspective, let’s use a recent colocation data center decommissioning job that ITRenew managed. The client was decommissioning 3,865 servers that contained 22,915 hard drives. In total, we managed the onsite erasure of 44 petabytes of data for this project alone. One petabyte is equal to 1,024 terabytes (TB). According to Computerworld, 1PB of storage is equivalent to all of the content in the U.S. Library of Congress—multiplied by 50. So, for this single, average-sized decommissioning job, we’re talking about erasing all the data in the U.S. Library of Congress some 2,200 times over.
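The arithmetic behind those scale claims is easy to check. A quick sketch, using the figures quoted above:

```python
# Scale of the colocation decommissioning job described above.
drives = 22_915      # hard drives decommissioned
total_pb = 44        # petabytes erased onsite
tb_per_pb = 1_024    # 1 PB = 1,024 TB
loc_per_pb = 50      # Computerworld: 1 PB = ~50 Libraries of Congress

print(total_pb * tb_per_pb)                     # prints 45056 (TB erased)
print(total_pb * loc_per_pb)                    # prints 2200 (Libraries of Congress)
print(round(total_pb * tb_per_pb / drives, 1))  # prints 2.0 (avg TB per drive)
```

That works out to roughly 2 TB of data per drive, every one of which had to be erased and verified onsite.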
- In addition to the data that’s stored on internal hard drives and local boot drives, physical servers may contain customer-specific configuration data. This can include hostnames, asset tag/ID information, service contract information, and internal network configuration information, all of which can be stored in the BIOS (primary and backup ROMs), on local SD/CF cards, or other devices.
- Network switches will contain configuration information and TFTP server volumes for imaging IT assets.
- Much of the detailed network infrastructure information, including redundant paths, can be extracted from the configuration of backbone network equipment and leveraged to find backdoor vulnerabilities.
- High-end data storage arrays will contain configuration information, cache volumes that can easily be missed during the decommissioning process, and data volumes, which are usually the most critical data-bearing components. These systems are designed to handle thousands of concurrent users and to process active records, including financial transactions. They are the most challenging devices to decommission due to their proprietary hardware and software interfaces. While many manufacturers offer a limited data wiping function, these functions have proven to be time-consuming and ineffective for audits.
- Mid-range storage arrays will house less-than-mission-critical data, but data valuable enough that it should be destroyed.
- Network-attached storage (NAS) and ZFS storage appliances are storage systems specialized for serving files.
Data center IT equipment will also contain both hard disk drives (HDDs) and solid-state drives (SSDs) that utilize a variety of drive interfaces, including SATA, SAS, Fibre Channel, and NVMe. The equipment will also deploy the broadest range of storage technology, including NAND- and PCI-based flash storage, shingled magnetic recording (SMR), high-capacity helium-filled drives, and 4K advanced-format sectors. It’s difficult to erase data consistently well across such a wide spectrum. This is the precise reason why we put Teraware through ADISA certification on the most challenging lineup of storage media: to ensure data erasure compatibility for all IT equipment found in the data center, not just a limited testing sample. As I explain in Data Erasure Certification: Don’t Just Take My Word for It, certification is also important to ensuring erasure compatibility.
Another challenge: most data centers lack a well-defined RMA (return merchandise authorization) process. As a result, failed hard drives are commonly stored in cages with lax IT inventory management and chain-of-custody controls. Eventually, most RMA-eligible hard drives end up being shredded, which not only destroys the opportunity to return drives under warranty for manufacturer credit, but also increases data security risk. The process of shredding hard drives lacks the systematic verification that data erasure provides. There is also the risk that data can potentially be recovered from shredded hard drive fragments, or that drives could be missed (accidentally or intentionally), with data remaining intact.
RMA is a challenge no matter how big the data center infrastructure. Even a medium-sized cloud operation can have approximately 100,000 hard drives in total across its infrastructure. That means between 10,000 and 15,000 of those hard drives will fail, and when they do, they will get pulled from production. That’s a significant volume of hard drives to manage, which makes it all the more surprising that so few companies have a controlled RMA process. To learn more about data center RMA opportunities and challenges, read the ITRenew blog RMA: The Daily Grind.
Finally, let’s not forget the challenge of erasing data from hard drives that have failed in production or have been deemed problematic for one reason or another. Data erasure “yield,” or the wipe success rate, will be lower when processing drives that have health issues. However, with Teraware we still consistently achieve 75-85% yield on failed hard drives. Of course, if failed hard drives cannot be erased onsite, they will likely be destroyed, eliminating the opportunity to receive RMA replacement or credit. Read this case study to learn how we were able to recover 60% of the hard drives that had previously failed to erase using another industry-leading data wiping tool.
With Teraware integrated into our data center decommissioning service, ITRenew generates a report of every drive that has failed to erase, including the precise location of each drive down to the slot number. For example, the report will include the colo/container, the rack serial number/floor tile, the server number within the rack, and the slot number within that server where the failed drive resides. In large-scale data center decommissions, locating and removing failures can be extremely time consuming. This unique capability allows failed in-cabinet drives to be quickly identified and processed for destruction, with the added benefit of cutting technician labor time considerably. Otherwise, without the ability to locate the hard drives with erasure failures, every drive in the job would have to be handled as if it still contained data.
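To illustrate the kind of location record described above, here is a hypothetical sketch in Python (the field names and values are my own illustration, not Teraware’s actual report format), capturing enough detail for a technician to walk straight to a failed drive:

```python
# Hypothetical failed-drive location record, mirroring the fields
# described above: colo/container, rack, server position, and slot.
from dataclasses import dataclass

@dataclass
class FailedDriveLocation:
    serial: str   # drive serial number
    colo: str     # colo/container identifier
    rack: str     # rack serial number / floor tile
    server: int   # server position within the rack
    slot: int     # drive slot within that server

# Example record with made-up values.
drive = FailedDriveLocation(
    serial="WX41A002", colo="CONT-3", rack="FT-117", server=14, slot=5
)
print(f"{drive.serial}: {drive.colo} / rack {drive.rack} / "
      f"server {drive.server} / slot {drive.slot}")
```

A report like this turns a search through thousands of in-cabinet drives into a short, targeted pull list.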
Protecting Sensitive Hard Drive Data That Cannot Be Erased
As mentioned earlier, data centers are filled with IT equipment that stores all different kinds of data in all different types of storage components. Unfortunately, not all of these data-bearing components can be sanitized through data erasure protocols.
A data center ITAD specialist properly experienced in onsite data security services will know which data center IT assets contain storage components that cannot be erased, precisely where those components are located, whether they store potentially sensitive data, and exactly how to protect that data.