Scuba and ITAD: When Industry Certifications Matter

In 1943, Jacques-Yves Cousteau and Émile Gagnan invented the Aqua-Lung, an apparatus that would eventually evolve into modern scuba diving equipment. By the 1950s, there was enough commercial interest in the device that demand arose for experts to help instruct the public in safe diving. The technology was still very new, however, and constantly being updated, so no government or official certifying agency existed to codify the training. As a result, leaders in the industry formed organizations such as the National Diving Patrol, which led efforts to do just that. Today, scuba certifications are still handled by private organizations like PADI and NAUI. It might be surprising, but something very similar occurred in data center infrastructure management.

For data center infrastructure management, the top organization for certifications is ADISA, the Asset Disposal & Information Security Alliance.

“The ADISA audit process is multi-layered including full audits, unannounced operational audits and forensic audits. This approach ensures ADISA Certified companies are constantly being kept on their toes. With companies suspended and others removed from the program, the certification is something companies work hard to achieve and even harder to maintain.”

ADISA’s ITAD certifications are managed by asset disposal experts who have been involved in all aspects of data policy and have watched the industry grow and mature for more than 15 years. Just as in scuba, ADISA manages its own academy to help train “professionals involved in the process of ICT Asset Retirement or Recovery.”

Whether it’s underwater or in the data center, industry certifications matter because they’re created by professionals who know their field.

Data Security Strategies and the Chief Financial Officer

In the not-too-distant past, Chief Financial Officers only had to focus on their company’s finances. Now, however, the CFO is often preoccupied by data security concerns. In a recent survey by Duke University, “finance chiefs at nearly one in five companies that participated in the outlook survey admitted that hackers have breached their computer systems.” Fortunately, most of these companies are also taking steps to reduce those risks; about half of the surveyed companies are increasing employee training and hiring data security experts. “Most of the emphasis on data security is placed on protection from near-continuous denial-of-service attacks and critical data breaches,” according to Cam Harvey at Duke University. These are the kinds of attacks that can slow down business operations by impairing anything from a website to services hosted on a private server. In these cases, data security policies are reactive, meeting the obvious needs of the business.

Companies like Facebook go above and beyond to protect their data, as Chief Global Security Officer Nick Lovrien explains: “[Facebook] had one data center and it was one building. Today we have about 20 data centers and each of those data centers is growing to about 20 buildings per data center.” These servers are where Facebook users store their photos, videos, and many other types of files. Facebook employs a variety of methods to keep these servers secure, from a buddy system in their facilities to treating “the lobby as the perimeter.”

Our expertise often comes in when companies are looking to maintain that security beyond deployment. While most strategies have a reactive stance, we’re finding that more and more, companies are paying attention to how data security is maintained throughout the hardware lifecycle. Companies have data centers with warehouses of hardware housing sensitive information, and we come in to help clear those warehouses of data as they come off-line.

It’s an exciting time for the data center products and services industry. A record number of data center environments have been launched over the past five years, and many of those servers are reaching the end of their lifecycle. More than that, though, it’s encouraging to see increased awareness of the need to maintain data security throughout the hardware lifecycle.

Impacts of the General Data Protection Regulation (GDPR) on Data-Driven Businesses

The General Data Protection Regulation (GDPR) was adopted by the European Union on April 14, 2016. The GDPR deals with the data privacy of EU citizens, but it also applies globally:

It also addresses the export of personal data outside the EU. The GDPR aims primarily to give control to citizens and residents over their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU. – Wikipedia

The goal of the GDPR is to protect the citizens of all 28 EU member states from privacy and data breaches. The measure replaces the 1995 Data Protection Directive but expands its scope for a world that is increasingly data-driven. Overall, the spirit of the GDPR is to give EU citizens control of their personal data. The law’s new requirements come into effect May 25, 2018.

The biggest change from the 1995 Directive is that the regulation now applies to all companies processing the personal data of people residing in the Union, regardless of the company’s location and regardless of whether the processing takes place in the EU. There will also be severe consequences for organizations that do not comply with the new privacy rules, which apply to both controllers and processors, including operators in the “cloud.”

The impact on businesses in the United States is huge and will permanently change the way customer data is collected. The GDPR applies to all EU residents’ personal information, regardless of the geographical location of the business the customer is dealing with. If a company offers any goods or services to, or monitors the behavior of, EU residents, it must meet the new requirements or suffer large penalties.

Under GDPR, organizations in breach of GDPR can be fined up to 4% of annual global turnover or €20 Million (whichever is greater). This is the maximum fine that can be imposed for the most serious infringements, e.g. not having sufficient customer consent to process data or violating the core of Privacy by Design concepts. – eugdpr.org
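That exposure is easy to quantify. Below is a minimal sketch, in Python and for illustration only, of the maximum-fine rule quoted above:

    # Illustrative only: the GDPR maximum-fine rule quoted above.
    # Figures are in euros.
    def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
        """Maximum fine for the most serious infringements: 4% of annual
        global turnover or EUR 20 million, whichever is greater."""
        return max(0.04 * annual_global_turnover_eur, 20_000_000)

    # A company with EUR 2 billion in annual global turnover faces a
    # maximum fine of EUR 80 million, since 4% exceeds the EUR 20M floor.
    print(max_gdpr_fine(2_000_000_000))  # 80000000.0

For any company below €500 million in annual global turnover, the €20 million floor is the binding figure.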

Ensuring information is secure while your data center is “in production” is the most top-of-mind security concern, but what happens when that data, or the asset on which it resides, reaches end of life, or when the data center leaves production for an infrastructure refresh? This is often the most vulnerable stage for information security within the data center, particularly when storage assets are stored or transferred from one place to another rather than erased onsite. From our perspective, the best practice for data security is to have the shortest possible chain of custody for data-bearing data center assets.

Aware of this heightened vulnerability, the GDPR requires a documented chain-of-custody process for both storage assets and data. A clear and precise ILM-ALM chain-of-custody plan is what’s required. Start today by having a vendor experienced in the field review your current lifecycle management and security plans.

Best Practices in Data Center Decommissioning

In the past, we’ve admittedly focused primarily on the “challenges” associated with data center IT asset disposition compared to corporate ITAD. This has included the following viewpoints:

  • Very few companies seek out and use a specialized IT asset disposition solution for data center decommissioning, even though using a “general ITAD” solution is akin to seeing a family practitioner for brain trauma.
  • Data center ITAD and general ITAD, like baseball and cricket, may appear to be similar, but they are distinctly different in ways that require unique strategies to win.
  • While the ITAD industry is considerable in size and rapidly growing, it lags in the most pressing area of need (the data center) and on the highest-priority objective of eliminating data security risk.
  • Data center decommissioning is fraught with operational challenges, IT asset security challenges, and onsite data security challenges that ITAD solutions created long ago for corporate IT are simply not cut out to handle.

Helping companies understand these inherent issues with ITAD, we believe, will help them make a more informed decision when selecting a data center decommissioning service provider and a data sanitization platform. But now, let’s shift gears and talk about the solutions associated with data center IT asset disposition.

The DO’s in Data Center ITAD

Whatever the size and type of your data center decommissioning project, whether an IT refresh or upgrade, a data center consolidation, migration or shutdown, or an IT lease return or trade-in, follow these best practices to greatly improve your results:

  • Erase data onsite and in-cabinet, as data center IT equipment leaves production. For corporate ITAD, it’s not very common to erase data onsite, and to some extent for good reason: corporate workforces are highly dispersed, often in places where few if any IT resources physically reside, making the technical ITAD process, and the verification of that process, challenging. Secure transport of dispositioned hardware, while still costly, is more practical and economically feasible for corporate IT. In larger corporate environments and IT campuses, though, erasing onsite over the network is recommended. In the data center, erasing data onsite should be a must. And whenever possible, data should be erased with hard drives remaining inside servers and racks, also referred to as in-cabinet data erasure. When hard drives are removed from IT hardware, security risks immediately arise and costs invariably increase. Loose drives must be meticulously accounted for at all times, which unfortunately is often not the case. And don’t think for a second that people in and around data center IT aren’t keenly aware of the value of private information on the black market. If you can’t account for every loose hard drive and what data is on each, then you’re at risk of a breach.

    The best practice solution is to erase data as IT equipment comes offline and out of the data center production environment. This process requires network-based data sanitization scalability to efficiently erase mass-capacity data center storage. This ITAD process should include a Certificate of Sanitization demonstrating successful, sector-verified erasure for each serialized hard drive.

    When erasing entire server racks with hard drives intact, the process should also identify any hard drives that failed to erase, including the serial number and exact location. This information is used to ensure that those drives can be quickly removed and destroyed before transport. As our data erasure expert Matt Mickelson has discussed before, if you cannot locate and pull every hard drive that failed to erase, then you must handle every hard drive in that rack as if it contained data, which defeats the purpose of erasing data in-cabinet.

    It’s additionally important that the data erasure software document the parent-child relationships between the racks, the servers, and the internal hard drives, especially if data is not being erased onsite. Otherwise, data-bearing hard drives could go missing in transit and may not be discovered until it’s too late, if ever. Teraware not only captures the host-to-disk relationship, but writes an encrypted, random signature to the drive so that successful erasure can be uniquely verified upon receipt at the ITAD facility.

    Of course, hard drives commonly fail while in production at the data center and when they do, those drives must be removed. ITRenew developed the Break-Fix Terabot to efficiently erase loose drives, and to physically crush any that cannot be erased on the spot, with a Certificate of Destruction provided for auditing purposes. This type of data erasure utility should be a fixture in every data center, purposely built for the different sizes, capacities, and volumes of hard drives deployed in the data center.

  • Focus on data erasure and reusable hardware yields at the data center. If you’re someone who’s involved in ITAD or any area of IT, think of all the metrics used to measure performance and compliance. Do you have one, or have you even heard of one, that measures data erasure “yield”, or in other words, how often the data erasure process results in a verified successful disk erasure? Have you ever stopped to think what a reasonable hard drive erasure yield should be? In our experience, most data erasure tools used for ITAD have about a 50-60% yield. The rate will vary based on a variety of factors, including the health, size, and type of hard drive as well as the underlying storage technology. Teraware, on the other hand, has an ITAD industry-best 95+% data erasure success rate overall. Our rate consistently comes in around 99% on mass data center decommissioning projects, when we erase entire racks at a time over the network without ever removing the hard drives from the server cabinet or chassis.

    If you’re not already measuring yield (and I personally know of very few companies that are), then demand that your ITAD vendor provide yield statistics, including a breakdown of which drives are failing to erase; the sketch after this list shows what such a roll-up might look like. Some data erasure tools are completely unable to erase solid-state drives (SSDs), high-capacity drives, and other newer storage technology. Unfortunately, those are exactly the types of drives that were commonly purchased for data centers 2-3 years ago and are now beginning to come out of service in mass quantities. If your data erasure yield on those is 0%, you’ll want to know sooner rather than later.

    When data erasure fails, there’s a ripple effect: additional cost to destroy drives, lost remarketing and RMA value, lower IT sustainability and, most importantly, inferior protection against data breach. Monitoring yield rates will help you pinpoint where problems exist and initiate corrective action.

  • Choose a data center ITAD specialist as an IT hardware remarketing partner. As our data center remarketing expert Kyle Roche has discussed, data center IT equipment typically holds much greater residual value at decommission than corporate IT, and the components are sometimes worth more on the secondary market than the entire unit. The end-user marketplace for data center IT equipment is also much smaller, which forces most ITAD companies to use broker channels, particularly when volumes are high. A data center specialist will understand these challenges and have the capabilities to capitalize on the remarketing opportunity. For example, in the data center it’s common for ITAD projects to be put out to bid, with the service fees associated with IT asset recovery, data erasure, and e-waste recycling typically buried in the bid price. However, few if any reputable ITAD service providers or remarketing brokers will take on the full liability associated with the bid model. Therefore, many ITAD companies decrease their bids by upwards of 70% to allow for uncertainties in the IT aftermarket. Since many remarketed products are subject to commodity constraints, meaning their price fluctuates based on supply and demand, prices go up and down daily as if on their own stock market. This fluctuation manifests itself as liability, and one wrong move can spell disaster for the ITAD service provider, so IT remarketers adjust their bid prices accordingly. However, if those ITAD providers sell high, the customer does not see a penny more.

    A true data center IT remarketing specialist will instead utilize a revenue share model between the company and ITAD provider because, when executed properly, it will deliver greater return for both them and the customer. Under revenue share, the ITAD customer will receive a pre-determined consignment share (typically 75%) of the remarketing proceeds, after ITAD costs have been deducted. For this model to be more rewarding for both parties, the ITAD vendor must be intimately familiar with data center IT equipment, the niche secondary market of buyers, and the economics of selling mass quantities of like, high-value products.

    A true data center IT remarketing specialist will also know the optimal velocity to release data center IT products into the secondary market so as not to deflate prices, and when it’s better to sell product as whole units or parts. The ITAD provider will have developed specialized, direct-to-end-user channels to move the high-volume, high-value data center gear at or above fair market values, so that broker channels aren’t necessary. And they will also have the financial wherewithal to wait out market conditions to sell equipment at its highest price point. Many larger ITAD companies are publicly held or have major investment partners. Often, they will sell off client equipment as needed to satisfy sales revenue projections. This benefits the provider and its investors, but is not always in the best financial interests of the customer.

    A data center ITAD specialist will also use software designed for the data center. A high data erasure yield preserves more drives for remarketing and allows servers to be sold as fully functional systems, netting a much higher resale price.

    Specialized data erasure software will also discover full configuration details of the equipment onsite. Companies are often unable to provide this information upfront, and if the IT equipment is put out to bid, offers will be lower to account for the configuration uncertainties. Even in a consignment model, if you don’t know exactly what you have and its approximate value, you are left to accept whatever your ITAD vendor says it’s worth, after you’ve already incurred the logistics expense to transport this massive equipment. With Teraware, we can often have IT equipment sold before it hits our docks because we know exactly what we will be receiving, plus the software puts equipment through stress testing to help ensure an extended secondary life. If any repairs or upgrades are necessary, we can also requisition parts in advance so sales can be turned quickly.
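As promised above, here is a minimal sketch, in Python with entirely hypothetical record fields, of how per-drive erasure results can be rolled up into the yield metric discussed earlier, with failed drives pinpointed by rack and slot so they can be pulled and destroyed before transport. Real platforms such as Teraware define their own schemas and certificates; this only illustrates the bookkeeping involved.

    # Hypothetical illustration: computing data erasure "yield" from
    # per-drive results and flagging failed drives by physical location.
    from dataclasses import dataclass

    @dataclass
    class DriveResult:
        serial: str
        rack: str
        slot: int
        erased_verified: bool  # sector-verified successful erasure

    def erasure_yield(results: list[DriveResult]) -> float:
        """Yield = verified successful erasures / drives processed."""
        return sum(r.erased_verified for r in results) / len(results)

    def failed_drives(results: list[DriveResult]) -> list[DriveResult]:
        """Drives to locate, pull, and destroy before transport."""
        return [r for r in results if not r.erased_verified]

    results = [
        DriveResult("ZA1700AB", "R12", 3, True),
        DriveResult("ZA1700AC", "R12", 4, True),
        DriveResult("ZA1700AD", "R12", 5, False),  # failed: pull and destroy
        DriveResult("ZA1700AE", "R13", 1, True),
    ]

    print(f"Yield: {erasure_yield(results):.0%}")  # Yield: 75%
    for drive in failed_drives(results):
        print(f"Destroy {drive.serial} at rack {drive.rack}, slot {drive.slot}")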

Data Center Decommissioning and Enterprise ITAD: Two Different Playing Fields

Baseball and cricket. Two seemingly similar bat-and-ball games with a common objective: to score runs while preventing the other team from doing the same. However, the two games bear quite a few differences the closer you look at them: the fields on which they are played, the equipment used to play them and the rules by which they are played—ultimately affecting the strategies employed to win.

The same holds true when contrasting the differences between ITAD strategies for the data center and ITAD strategies for corporate IT. As I pointed out in my previous blog, the two are seemingly very similar in that IT equipment is refreshed and must be dispositioned properly, with the common objective of preventing sensitive data from being exposed or harmful contaminants from entering the e-waste stream while maximizing value recovery. Yet, like baseball and cricket, the closer you look at the ITAD strategies that should be employed to win, the more obvious are the differences.

ITAD Strategy: Data Center and Corporate IT are Different Playing Fields

Data center decommissioning and enterprise ITAD for corporate IT are uniquely different in the following ways:

  • Hyper diversity. Modern data centers are hyperscale, a term that characterizes not only the size of the data center but also its network architecture and approach to hardware. These data centers are designed to provide a single, massively scalable compute and storage architecture that can encompass a million servers or more, which can equate to several million hard drives across the enterprise. Conversely, many Fortune 500 corporate IT environments are hyper-dispersed, the polar opposite of a hyperscale architecture. Where most larger organizations will have fewer than 10 data center locations, the most distributed business environments can have millions of company-owned IT assets and bring-your-own devices scattered across hundreds or even thousands of corporate and home office locations. Adding to the locality challenge? Companies that operate a “data center in a box,” or pre-manufactured containerized facility, which can be located just about anywhere, including in some cases on the roof of an existing facility.
  • Physical security. While some corporate office locations can have fairly tight physical security, most require no more than keycard access to enter even the most restricted areas of the facility. Very few of the corporate offices that I’ve seen approach the minimum physical security standards common to data centers, let alone the military-grade protection in ultra-high security data center environments. For example, how often in your workplace must armed guards escort guests at all times?
  • Physical space and downtime. Normally with enterprise IT, equipment can be disconnected from the corporate network and replaced by IT staff before employees show up for work the next day. Old equipment can often be moved to an IT campus or staging area, or into a warehouse or even temporary storage, before it is palletized for asset recovery and disposition. Data centers are not typically afforded such luxuries. Every square foot of space possible is used for production and generating revenue for the data center operator—or prepping for production—which means old equipment must be quickly decommissioned to make way for the new equipment. And while shipping insecure hardware by way of high-security logistical means may not be so taboo for corporate IT, the idea of shipping dirty servers is quite different. So, there’s the not-so-small challenge of efficiently erasing mountains of server data to avoid any one of several undesirable outcomes: 1) occupying costly floor space for days or even weeks if the data sanitization process is not sufficiently scalable and efficient, 2) physically removing and destroying high volumes of high-value hard drives, or 3) shipping via high security at great expense and vulnerability. Many companies occupy colocation (colo) data center facilities, where equipment, space, utilities, bandwidth, physical security and other services are rented, which can introduce unique requirements to the ITAD process.
  • Hardware disparity. Data centers are full of a relatively homogenous mass of technology: servers and racks, storage systems, switches and other networking gear, commonly of the same or similar make and model. Corporate IT encompasses a much broader spectrum of technology, brands and configurations. Data center hardware is also much larger, heavier and less mobile than your typical corporate IT hardware, while storing a much greater density of data on a more complex spectrum of storage technology.
  • Lifecycles. Cloud computing, big data and everything-as-a-service have driven insatiable demands for greater compute and storage in the data center, while conversely, these technological innovations have reduced the same demands at the desktop. Furthermore, the data center stands to realize greater cost and utility savings from upgrading hardware, not to mention performance and productivity gains. As a result, technology lifecycles are shortening in the data center while lengthening in many other areas of corporate IT.
  • Break-fix and RMA. While all types of IT will break and require maintenance, data center hard drives present unique challenges—and untapped financial opportunity—due to their high failure rate. Upwards of 15% of data center drives will fail or be pulled for predictive failure, most of them under warranty and returnable for RMA credit or replacement. Given the sheer volume of drives in the data center and their high value, RMA should be a core ITAD consideration for the data center, yet it is commonly out of scope for enterprise ITAD.

So, when it comes to IT asset disposition during data center decommissioning, there’s a huge disparity between the number of locations to be supported by the enterprise ITAD strategy, and how dispersed the locations are. Among other ITAD considerations, this can greatly impact your data security strategies.

Much different are the physical security requirements for gaining access to the data center, and then navigating the facility—especially if it’s a modular data center on top of another building. Downtime ramifications are distinctly different, as are the mix, physical aspects and storage capacity of the individual IT assets. All of this will affect ITAD strategies for onsite services, which in turn will impact data center production, depending on how efficiently data and equipment are decommissioned.

And, finally, IT lifecycles in the data center are heading in the opposite direction, with most corporate IT at end-of-life and destined for e-waste recycling while most data center IT has significant reusable value remaining, whether through resale, redeployment, parts harvesting or RMA credit. Therefore, value recovery opportunity is much greater in the data center. But the market for used data center gear is very different than corporate IT. You don’t just throw a rack up for sale on eBay as you might a refurbished laptop.

In the next several blogs, we’ll dive head first into the operational, security, remarketing and environmental challenges that these ITAD strategy differences present, establishing why it’s so crucial to utilize a specialized data center decommissioning solution rather than traditional ITAD.

Before doing so, however, the natural question is this: If the playing fields for data center decommissioning and enterprise ITAD are so different, why are vendor solutions and client processes today all pretty much the same? In my next blog, we’ll look back at what’s transpired in the IT asset disposition industry during the past decade to lead us to where we are: with standardized ITAD solutions designed for the IT of yesteryear. Adding to this challenge is an industry in which a small collection of ITAD service providers controls the clear majority of asset volumes and is unmotivated to evolve for the modern-day needs of the data center. Until now.

Data Center Decommissioning: A Different Kind of ITAD

When it comes to IT asset disposition (ITAD), or the process of recovering and safely disposing of obsolete or unwanted IT equipment and business technology, in theory data center decommissioning should be no different than decommissioning any other type of enterprise IT asset.

Data Erasure Certification: Don’t Just Take My Word for It

Whenever we think of best practices for any procedure, let alone one that is used to protect sensitive hard drive data from unauthorized access, there are two fundamental elements that validate the procedure:

  1. Metrics
  2. Checks and balances

Every Certified Six Sigma Black Belt I know would jump at any opportunity to rant about why a procedure without metrics is total garbage, and would then talk your ear off about how to use a set of techniques and tools for process improvement.

Don’t get me wrong. Six Sigma has always been one of my most respected certification programs as it not only helps in constructing solid processes and procedures, but also allows degrees of auditable checks and balances. And in my line of work, its core purpose is particularly relevant. By Wiki definition, Six Sigma “seeks to improve the quality of the output of a process by identifying and removing the causes of defects and minimizing variability in business processes.”

When you’re talking about techniques and tools used to erase data on servers in your data centers, there’s really nothing more important than removing the causal defects and minimizing variability in the data sanitization process. As I will present at the IAITAM ACE Conference next week in Las Vegas, when a data security breach costing millions is at stake, the goal must be to eliminate risk altogether. And removing defects and variability from the data erasure process is certainly critical to risk elimination.

Technical program managers, engineers, analysts and compliance auditors all fill data-driven roles. We love to read data and come to conclusions. The real purpose of data mining is to find truth. Data can still be manipulated, but if produced by qualified sources, the data speaks for itself.

Data security compliance is certainly an area where we must do our due diligence and ensure the checks and balances of eliminating security threats are valid.

Independent Data Security Certification Examiners

Internal quality control checks will always be a good practice for any organization to ensure processes are followed and that data-bearing IT devices are handled according to the defined procedures. For certification of the data sanitization process, however, using a third party is paramount for validating the solution provider’s claims.

Utilizing independent organizations that leverage security research and forensic labs, such as ADISA, will help to ensure the certification is legit. As a data security professional and self-proclaimed technical process freak, what I value in the ADISA certification process is the technical separation whereby the certification is challenged by another independent organization whose core charter is the development and advancement of cyber security defense practices. That’s right, ADISA does not in-source the validation process, but instead hands this critical function over to information security experts at the University of South Wales.

Whenever ITRenew receives an ADISA certificate for Teraware data sanitization, it carries true merit, because each certificate confirms we have forensically erased the data as claimed. ITRenew just recently completed another round of ADISA testing for Teraware, and we now actively hold 17 certificates, more than all other data erasure products combined.

ADISA Threat Matrix: A Solid Workflow

ADISA and the University of South Wales have developed a product claims testing process for the data sanitization of electromagnetic and solid-state hard drives. Within the methodology, there is a threat matrix with an associated test level that determines the degree to which a product’s claims are challenged.

Risk Level | Threat Actor and Compromise Methods | Test Level
1 – Very Low | Casual or opportunistic threat actor only able to mount high-level non-invasive and non-destructive software attacks utilizing freeware, OS tools and COTS products. | 1
2 – Low | Commercial data recovery organization able to mount non-invasive and non-destructive software attacks and hardware attacks. | 1
3 – Medium | Commercial computer forensics organization able to mount both non-invasive/non-destructive and invasive/non-destructive software and hardware attacks, utilizing COTS products. | 2
4 – High | Commercial data recovery and computer forensics organization able to mount both non-invasive/non-destructive and invasive/non-destructive software and hardware attacks, utilizing both COTS and bespoke utilities. | 2
5 – Very High | Government-sponsored organizations or an organization with unlimited resources and unlimited time capable of using advanced techniques to mount all types of software and hardware attacks to recover sanitized data. | 3
Table: ADISA Threat Matrix

Utilizing the risk-level assessment chart, the process defines the corresponding test levels against which product claims are tested.

ADISA Test Level 1

Base-level erasure testing provides assurance that a drive has been sanitized at the logical level. For today’s electromagnetic hard disk drives, this is usually sufficient coverage for devices that have never contained classified materials, financial information or vital records. For solid-state drives, Test Level 1 should be treated as a checkpoint certification, not the sole certification level of a product, because the underlying technology in an SSD hides storage capacity from erasure tools. In other words, level 1 testing is sufficient for SSD erasure only if the sanitization tool has already passed a higher-level test. Teraware uses Test Level 1 as a compatibility checkpoint for like devices.

ADISA Test Level 2

The more rigorous forensic-level test works through a process of chip-off attacks, where the NAND components are dismantled from the drive itself and the flash contents are read back in their rawest form. Level 2 testing covers the full native capacity of the hard drive, including areas hidden from the user-provisioned space.

ADISA Test Level 3

At this time, a level 3 test is, in my opinion, unobtainable. The threat matrix defines the scenario of unlimited resources, and the process consists of exhausting all available penetration test techniques including ones that have not yet been created. However, it does provide a goal for further development.

Comprehensive Data Erasure Certification Testing

For a good data sanitization certification plan, it is critical to think about what you want to get out of the certification process and what your target customers would expect to see in a certification result.

For ITRenew and Teraware certifications, it was important for us to look at the detailed process and capability as the cornerstone of the certification process. As a public form of certification, ADISA was that cornerstone for us. On a private level, we also have several other forensic studies that are commissioned specifically by our Fortune 500 customers through their infosec organizations and their third-party forensic laboratories. So, we have the flexibility and experience to satisfy all testing criteria.

Regardless, the process will always include level 2 testing for forensic-grade certification and level 1 for vendor compatibility. It is important that the devices selected cover various interfaces and drive technologies. The certification as a whole represents a sampling of the sanitization platform’s capability from consumer-grade products through enterprise-grade IT equipment.

ITRenew’s latest round of Teraware certifications covers:

  • Drive interfaces: SATA, SAS, Fibre Channel, and NVMe.
  • Drive types: hard disk drives (HDDs) and solid-state drives (SSDs).
  • Technologies: SLC/MLC/TLC/3D NAND flash, 4K Advanced Format for SAS and SATA, helium-filled HDDs, shingled magnetic recording (SMR), NVMe admin command protocols, ATA Sanitize Feature Set, SCSI Sanitize Feature Set, 12Gb/s SAS, 6Gb/s SATA, PCI-based flash storage and high-capacity drives (8TB and 10TB).

This product testing diversity helps ensure data erasure compatibility on the latest and greatest technology deployed in your data centers. Below is a complete list of ADISA certifications for Teraware data erasure.

Vendor | Family | Model | Capacity | Interface | Risk Level | Test Level
HP | ProLiant SSD | MO0200FCTRN | 200GB | SAS-SSD | 1,2,3,4 | 2
HP | ProLiant SSD | EO0400FBRWA | 400GB | SAS-SSD | 1,2,3,4 | 2
Intel | 320 Series | SSDSA2BW160G3D | 160GB | SATA-SSD | 1,2 | 1
Intel | 320 Series | SSDSA2BW160G3L | 160GB | SATA-SSD | 1,2 | 1
Intel | X25-M | SSDSA2M160G2GC | 160GB | SATA-SSD | 1,2 | 1
Samsung | PM800 Series | MMCRE28G5MXP-0VB | 128GB | SATA-SSD | 1,2 | 1
Samsung | PM810 Series | MZ7PA128HMCD-01 | 128GB | SATA-SSD | 1,2 | 1
Samsung | SS410 Series | MCBQE32G5MPP-0V | 32GB | SATA-SSD | 1,2 | 1
SanDisk | Lightning Series | LB150S | 150GB | SAS-SSD | 1,2,3,4 | 2
HP/HGST | HGST Ultrastar SSD800MM | MO0400JDVEU | 400GB | SAS-SSD | 1,2,3,4 | 2
Intel | 535 Series | SSDSC2BW480H6 | 480GB | SATA-SSD | 1,2 | 1
Samsung | 750 EVO Series | MZ-750500BW | 500GB | SATA-SSD | 1,2 | 1
Intel | DC P3600 Series | SSDPEDME400G401 | 400GB | NVMe-SSD | 1,2 | 1
Intel | 540s Series | SSDSC2KW480H6X1 | 480GB | SATA-SSD | 1,2 | 1
EMC/STEC | EMC Enterprise Flash Drives | Z16IFE3B-200 | 200GB | FC-SSD | 1,2,3,4 | 2
Seagate | Archive HDD | ST8000AS0022 | 8TB | SATA-HDD | 1,2,3,4 | 2
HGST | Ultrastar He10 | HUH721010AL4200 | 10TB | SAS-HDD | 1,2,3,4 | 2
Table: Active Teraware ADISA Product Claims Test Certificates as of April 2017

When evaluating ADISA claims test results, a few points of caution. First, data sanitization tools that have gone through the ADISA claims process but focus only on the base-level test or a single interface type may not give you the assurances you need. Second, be sure the testing is still relevant: that it was conducted against the current version of the software, and not merely on drives first certified for SSD erasure more than two years ago.

About the Author
As the Director of Product Development, Matt Mickelson is responsible for development of ITRenew’s data sanitization product line. This includes Teraware, an enterprise-grade data sanitization and asset management software platform, and Terabot, a line of do-it-yourself data erasing machines that are powered by Teraware software. Matt also directs the ITRenew Innovation Center, an R&D facility and data security think tank whose core charter is to intimately understand – and stay ahead of – the various technologies that storage manufacturers develop and customers deploy in order to ensure compatibility with Teraware data center decommissioning.

What Does Data Center Growth Mean for IT Asset Disposition?

For 20 years, I have dedicated my career to the IT industry, and the pursuit of growth and innovation in technology has kept me here. Starting out as a quality engineer at Iomega Corporation (who remembers Zip drives?) taught me one very important lesson: growth and innovation isn’t constant, it’s variable. Like a biological ecosystem, one innovation can emerge, grow tremendously and be replaced in moments by another. This concept is best characterized by the economics term creative destruction. Creative destruction comes with huge growth and shifts in IT spending. From infrastructure to entertainment, emerging technologies can wipe out legacy solutions, and sometimes entire technological platforms.

Today, a massive IT growth-shift is occurring toward the data center. In fact, it’s been brewing for years. Wide-scale adoption of business cloud services and the new ultra-connected world of humans and machines have created explosive demand for offsite storage and compute. While some IT segments are flat or declining, IT spend in the data center is steadily increasing year over year. Industry analysts such as 451 Research have estimated ongoing global data center market growth of more than 12% through 2021.

The past and projected future growth has given rise to an entire set of technologies and services within the data center environment. I was fortunate to spend the past four-plus years driving sales for one of the largest global data center providers, Digital Realty. The time at Digital helped me appreciate and better understand the unique nuances and potential of the data center environment. I observed veteran data center sales and engineering teams orchestrate many layers of changing technologies, competing to be the most efficient. Numerous hardware, software, service and connectivity providers are engaged in a veritable arms race to provide the fastest, coolest, most efficient and economical solution. All are riding the global data center market growth wave.

Many of the partners I worked with touch the data center IT lifecycle at different points, from pre-production infrastructure providers to in-production managed service providers (MSPs) and post-production recycling companies; all are looking to increase their offerings and broaden their span of the IT lifecycle. Each data center lifecycle segment is unique, but all are strong revenue-generating events with ample room for technological and efficiency improvement. I believe the latter half of the IT lifecycle (in production/end of production) is one of the most overlooked, least commoditized spaces, with tremendous market opportunity. The growth of data centers during the past decade means the number of IT deployments coming to the end of the lifecycle is a market-multiplier opportunity. Furthermore, many legacy data centers are more than 10 years old and less efficient than their modern-day counterparts, and so are in great need of IT refresh to boost efficiency.

What excites me about ITRenew is the opportunity for technology partners to take advantage of the growth in data center deployments while expanding their own offerings across the full data center IT lifecycle. Now, all IT solution providers who touch the data center will be able to add an array of onsite data sanitization verification and data center decommissioning services to their offerings. Even better, the ITAD solution developed at ITRenew was created in collaboration with the world’s leading cloud companies. That means we have products like Teraware and Terabot that are specifically designed to service the information security, IT asset management, value recovery and e-waste recycling needs of the in-production data center and at the point of hardware decommissioning. As ITRenew’s VP of Corporate Strategy Aidin Aghamiri writes in his blog Data Center Decommissioning: A Different Kind of ITAD, the data center is unique and requires a specialized solution. Aidin will also be giving a presentation on this topic at the IAITAM Spring ACE Conference on May 2 in Las Vegas.

As the new VP of Global Sales for ITRenew, I look forward to bringing my industry experience to bear and helping others in the present and future data center industry boom. Traditional data center service providers—from break-fix remote hands to infrastructure migration and managed services companies—can now become even more strategic with their accounts by providing a proven data erasure process and software platform. These systems deliver 100% auditable data and asset security while preserving nearly all hard drives for resale, and in the process reduce onsite data erasure and IT decommissioning time by 80%, helping to maintain peak levels of efficiency and uptime when retiring data center equipment from production.

With all of this in mind, I’m looking forward to the tremendous opportunities available to our clients, solving the data security and asset management problems in the data center, and helping to deliver the most effective solutions in the industry.

3 Vulnerabilities of SSD Data Sanitization

Since the inception of NAND flash-based storage devices, there have been a number of studies and many debates about the integrity of flash, including: what is the appropriate method for destroying data stored on NAND flash memory components?

With flash, there are variables which obstruct a simple approach of overwriting the medium with a basic or even complicated pattern as a means of forensic data sanitization. For this reason alone, adopters of flash who mandate forensic-grade data sanitization should reject a data erasure vendor’s claims of any solid-state drive (SSD) data sanitization method that is based on an overwrite technique.

When we think of a solid-state disk, we think of a storage disk with a SATA or SAS interface. This interface is known as the device’s frontend, and it communicates with the controller using the ATA or SCSI protocol. The SSD also contains a backend interface that is used to communicate with the NAND flash components. NAND flash has its own command protocol, which is governed by the Open NAND Flash Interface (ONFI) Workgroup. When the host system issues a WRITE command to the SSD, it uses the ATA or SCSI WRITE command; the drive’s firmware and Flash Translation Layer (FTL) then interpret and translate the operation into the ONFI-defined command, PROGRAM.

With ONFI, there are only a handful of commands (operational codes) defined as mandatory, which we can safely assume to be implemented because they are, well, mandatory. The commands utilized by an effective SSD sanitization tool are: BLOCK ERASE, PROGRAM, READ and READ STATUS.

Command | Opcode
Block Erase | 60h/D0h
Change Read Column | 05h/E0h
Change Write Column | 85h
Program | 80h/10h
Read | 00h/30h
Read ID | 90h
Read Parameter Page | ECh
Read Status | 70h
Reset | FFh
Table: Mandatory ONFI-defined operational codes for accessing NAND flash storage.

For a perfect SSD sanitization, the first step would be to build a full inventory of the NAND addressing/layout by means of READ ID and READ PARAMETER PAGE. At this stage, we are only concerned with the NAND not identified in the manufacturing defect map. From this point, a BLOCK ERASE is issued to all blocks outside the manufacturing defect map. This addresses all provisioned, un-provisioned, flagged-for-remap and remapped (post-manufacturing map) blocks. Since the BLOCK ERASE function is defined to include verification, there is no need to issue READ commands to the blocks or pages.
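To make that procedure concrete, here is a deliberately simplified Python sketch of the ideal flow just described, run against a simulated NAND device; the device model is invented for illustration only:

    # Illustrative sketch only: the "perfect" NAND-level sanitization
    # described above, against a simulated device. On real hardware these
    # ONFI commands are issued by the drive's own controller; the host
    # cannot reach them through the ATA/SCSI frontend, as discussed next.
    TOTAL_BLOCKS = 8
    nand = {b: f"data-{b}" for b in range(TOTAL_BLOCKS)}  # block -> contents
    manufacturing_defect_map = {5}  # factory-marked bad block, never used
    # Block 3 was remapped after manufacturing and may still hold user data;
    # it is deliberately NOT excluded from the erase pass below.

    def block_erase(block: int) -> bool:
        """Simulated ONFI BLOCK ERASE (opcode 60h/D0h); self-verifying."""
        nand[block] = "erased"
        return nand[block] == "erased"

    # Erase every block outside the *manufacturing* defect map: provisioned,
    # un-provisioned, flagged-for-remap and remapped blocks alike. Because
    # BLOCK ERASE verifies itself, no follow-up READ pass is needed.
    for block in range(TOTAL_BLOCKS):
        if block not in manufacturing_defect_map:
            assert block_erase(block)

    remaining = [b for b, v in nand.items() if v != "erased"]
    print(f"Blocks still holding data: {remaining}")  # [5], factory-bad only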

The challenge with achieving the perfect sanitization of an SSD is that it is not possible to bypass the FTL and send native flash commands directly to the NAND. What is exposed on the frontend is an ATA or SCSI interface and a logical block address (LBA) range, not the full physical NAND address map.

SSD Data Sanitization Vulnerability #1: Multiple Overwrites

As mentioned earlier, the device’s firmware and FTL translate ATA/SCSI commands into ONFI-defined commands, but addressing is not translated one-to-one. The device’s flash manager adds complexity when issuing PROGRAM cycles for translated WRITE commands, because physical addresses are never guaranteed: the flash manager’s wear-leveling algorithms preserve the integrity of the drive by always rotating in younger blocks and rotating out the older, dirty blocks.

How then is an overwrite method going to address this? Some vendors will say that if you overwrite a drive multiple times, you will get around the wear leveling. This is not accurate, and it leaves a drive vulnerable to a “chip-off” attack, which involves removing a NAND flash memory chip from a device and copying its data. Some SSD devices minimize the internal overhead of wear leveling and rotate a block out after X program cycles. If you know what X is, then overwriting the drive X times will reduce the risk, but still not eliminate it.
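To see why, consider this deliberately simplified Python model of wear leveling (all parameters invented): the FTL maps each logical write to a fresh physical block, so the original data survives every overwrite pass and remains exposed to a chip-off read.

    # Toy FTL model: a logical write lands on a fresh physical block; the
    # old block is retired but still holds its previous contents.
    PHYSICAL_BLOCKS = 16          # over-provisioned pool
    physical = ["empty"] * PHYSICAL_BLOCKS
    logical_to_physical = {}      # one logical block in this toy model
    next_free = 0

    def write_logical(block: int, data: str) -> None:
        """Write via the FTL: remap to a fresh physical block each time."""
        global next_free
        physical[next_free] = data          # data lands in a new location
        logical_to_physical[block] = next_free
        next_free = (next_free + 1) % PHYSICAL_BLOCKS

    write_logical(0, "USER SECRET")         # original sensitive data
    for i in range(3):                      # three overwrite "passes"
        write_logical(0, f"overwrite pass {i}")

    # A chip-off attack reads physical blocks, not the logical view:
    stale = [b for b in physical if b == "USER SECRET"]
    print(f"Stale recoverable copies after 3 overwrites: {len(stale)}")  # 1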

SSD Data Sanitization Vulnerability #2: Uncompressible Data

There is a message being spread throughout the industry that overwriting an SSD with “uncompressible” data will sanitize the device. The term uncompressible means no compression. If someone patents a method for overwriting an SSD with “uncompressible” data, using a statistical measurement to prove that the pattern stays within an acceptable threshold of randomness, then that threshold window is exactly what you need to be concerned about. The outcome is not an absolute “no compression,” and since it is not absolute, there is room for characters in the pattern to repeat. And if you have repeating characters, then you have a compressible pattern.

The other concern is this: if an uncompressible pattern were sufficient to sanitize an SSD drive, then why is it necessary to issue a firmware-based erasure command afterward? It is because the overwrite passes are not capable of forensically sanitizing an SSD device, whereas the firmware-based command is.

SSD Data Sanitization Vulnerability #3: Any Standalone Overwrite

The perfect sanitization method outlined earlier did not use any PROGRAM commands to the NAND flash. The reason is that the PROGRAM and READ flash commands have a grave limitation: they do not cover flagged blocks or defect maps. NAND flash is designed for high performance, and over time, as the gates within the cells wear out, they degrade to the point where cycle times are impacted.

A page can contain user data while its overall block becomes flagged, so that the next PROGRAM cycle rotates the block out of the provisioned area. At that point it becomes a bad block: one that contains user data and one that will be missed by the PROGRAM command. The only way to process such blocks is through the BLOCK ERASE command, which does not check the status registers of post-manufacturing defect maps and will automatically erase the data contents of the block.

Conclusion

When choosing a data sanitization tool for SSD devices, it is important to partner with a solution provider that does not focus on using an overwriting technique as a means of sanitizing the device. Security compliance should be based on an absolute forensic-level data sanitization technique that eliminates risk rather than focusing merely on minimizing risk.

About the Author
As the Director of Product Development, Matt Mickelson is responsible for development of ITRenew’s data sanitization product line. This includes Teraware, an enterprise-grade data sanitization and asset management software platform, and Terabot, a line of do-it-yourself data erasing machines that are powered by Teraware software. Matt also directs all aspects of the ITRenew Innovation Center, an R&D facility and data security think tank. The center’s core charter is to intimately understand – and stay ahead of – the various technologies that storage manufacturers develop and customers deploy in order to ensure compatibility with Teraware data center decommissioning.

An Overview of Data Security and IT Asset Management

Savvy data center managers care about data security. A key part of data security involves hardware asset management and erasure technology. This can mean tracking serial numbers for hundreds of storage devices every year and physically accounting for break-fix replacements. Oftentimes, aggressive launch schedules and complex technical operations overshadow forethought about implementing a solid data center ITAD program. While not complex, a well-defined and disciplined IT asset disposition platform and operating plan is essential to closing physical data security gaps within the data center.

The optimal data center hardware lifecycle is contiguous, with feedback loops at every step, to provide the greatest level of data protection and asset accountability:

  • Pre-production: collaborating with OEMs to make erasure effective before technology is deployed into live production. This means the ITAD software platform is compatible with your data center hardware specifications. ITRenew collaborates with hardware manufacturers every year to ensure compatibility with its Teraware platform.
  • Deployment: includes asset compatibility testing before implementation, discovery of equipment configurations, capture of serial numbers and identification of at-risk storage devices. Your IT Asset Disposition platform provider should perform this during their initial setup or walk your team through it.
  • In-production data centers: these account for the majority of all deployments. Most do not have a closed-loop IT asset tracking and erasure platform; however, it’s never too late to implement one. There’s no harm in looking at options, and the gains will likely pay for themselves, not to mention the recognition and peace of mind that come from having a physical data security ITAD plan in place. You can contact ITRenew today and request a Teraware demo.
  • Decommission: reconcile variances and monitor job progress, sector-verify wiped drives, and destroy or securely transport un-sanitized drives. This is best left to an outsourced provider like ITRenew. Not only will they certify sector-level erasure, but they’ll also remove your equipment and ensure remarketing at top rates. Most data center providers are surprised to learn how much their decommissioned hardware is worth, and actually enjoy recapitalizing those funds into the next deployment.
  • Recommission: reconcile the asset repository, remove company identifiers before remarketing, and complete compliance reporting to close the loop. Contact ITRenew to learn how many companies they do this for today. Their global footprint of recommissioning facilities will ensure your hardware is ready for resale.
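As a rough illustration of what closed-loop tracking means in practice, here is a minimal Python sketch; the stage names mirror the list above, but the record format is invented for this example. The point is that an asset must pass through every stage in order, so a data-bearing device can never silently skip decommissioning:

    # Hypothetical sketch of closed-loop asset tracking; real ITAD
    # platforms define their own schemas and audit trails.
    LIFECYCLE = ["pre-production", "deployment", "in-production",
                 "decommission", "recommission"]

    class Asset:
        def __init__(self, serial: str):
            self.serial = serial
            self.stage = LIFECYCLE[0]
            self.history = [self.stage]   # auditable feedback loop

        def advance(self, next_stage: str) -> None:
            """Allow only the next stage in order: no skipping decommission."""
            expected = LIFECYCLE[LIFECYCLE.index(self.stage) + 1]
            if next_stage != expected:
                raise ValueError(
                    f"{self.serial}: cannot jump from {self.stage} to {next_stage}")
            self.stage = next_stage
            self.history.append(next_stage)

    drive = Asset("SN-0042")
    drive.advance("deployment")
    drive.advance("in-production")
    drive.advance("decommission")   # must precede recommission
    drive.advance("recommission")
    print(drive.history)            # full, auditable lifecycle trail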

Most data center decision makers are choosing to outsource this portion of their operations to specialists. Benefits include enhanced data security, reduced liability and efficiency gains for internal staff. Whether you decide to outsource or insource, make sure you choose a platform and provider that is NIST compliant, and HIPAA or SOX compliant if your company requires it. ITRenew and the Teraware ITAD software platform have more industry certifications than any other provider.