IN THIS ARTICLE
The conventional approach to choosing enterprise storage is being challenged. New acquisition models are rapidly emerging, backed by strategic shifts from the major storage vendors. These developments offer the potential for dramatic changes in the established pricing model but require the user to make informed choices. This article sets out the new dynamics and offers guidance on getting the best from these new opportunities.
The IT industry is shifting to a new technology platform for growth and innovation. IDC calls it the 3rd Platform, built on mobile devices and apps, cloud services, mobile broadband networks, Big Data/analytics, and social technologies. Millions of users are connected to each other through mobile broadband with access to millions of applications and cloud services, which is contributing to the continuing exponential growth of data. IDC projects that the digital universe will reach 40ZB by 2020, exceeding all previous forecasts. IDC believes that only 0.5% of the world’s data is actually being analyzed, hence the importance of technology and talent to extract the hidden value from Big Data.
Cloud computing. This is a new delivery and service model that will shape IT spending over the next several decades. It entails shared access to virtualized resources over the Internet. Public IT cloud services spending will reach $47.4 billion in 2013 and nearly $108 billion in 2017, with a five-year compound annual growth rate (CAGR) of 23.5%, five times the growth of the IT industry as a whole. These services will drive 17% of IT product spend by 2017, up from 8% in 2012.
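As a sanity check, the compound annual growth rate implied by these spending figures can be computed directly; a minimal sketch using only the totals cited above:

```python
# Sanity check: CAGR implied by the public cloud spending figures above.
start_usd_b = 47.4   # 2013 spending, $ billion
end_usd_b = 108.0    # 2017 spending, $ billion
steps = 4            # annual growth steps from 2013 to 2017

cagr = (end_usd_b / start_usd_b) ** (1 / steps) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 23%
```

The result, roughly 23% per year, is consistent with growth running at several times the single-digit pace of the IT industry as a whole.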
Mobile devices. The “bring-your-own-device” (BYOD) trend, where corporate end users prefer to use their own devices to access information and create corporate content, has been growing rapidly in the corporate sector, such that many workers now have two or more mobile devices to access the corporate network. Access to software functions “as a service” that were once available only through licensed software deployed in the data center will continue to fuel public cloud services and storage capacity.
Big Data with real-time analytics. Information overload and the time and cost of finding the right information are significant issues for many organizations. These factors present an opportunity that can be addressed with Big Data platforms that enable nearly continuous, real-time analysis of data from a wide variety of sources. This new opportunity will drive new partnerships and specialized solutions into the market.
Machine-generated data. The information output from machines and sensors is now aggregated to process an even higher level of information and analytics, further driving exponential growth. IDC forecasts that machine-generated data will grow from 19% of the digital universe in 2012 to over 40% of the 40ZB that will be created in total in 2020.
Social media. The growth of services such as Twitter, LinkedIn, and Facebook can easily connect people and create new content that can be shared within a defined group or beyond. The volume of data created by social media is substantial and, combined with the frequency of creation, will challenge IT systems trying to unlock additional value.
While the amount of data to be stored continues to grow, IT organizations’ storage budgets do not increase proportionally with the growing needs for storage space. Limited datacenter space and power, growing operational costs, and data management complexity create additional pressure for providers to look for more efficient ways of using their existing storage assets.
NAND Flash Storage
Flash is having a profound impact on storage system architectures. When SSDs were first introduced, they were simply used to replace existing HDDs and speed up the existing infrastructure. However, there are a multitude of different performance and capacity requirements depending on the enterprise workload. Therefore, as the technology has evolved, numerous approaches have emerged, depending on the operating environment, to capitalize on the benefits associated with solid-state storage.
Considerations When Adding Flash Storage
This variety of available solutions means that SSD and flash technology will be utilized in multiple architectures within the datacenter to deliver on both short-term and long-term business requirements. To provide the best match for the performance and capacity necessary for the various types of workloads, the following architectures have emerged:
Server based. A server-based architecture provides the lowest latency because the flash or SSD is closest to the processor and application infrastructure from an I/O path perspective. In general, server-based flash storage cannot be shared between servers. This approach can be targeted specifically to a single application for acceleration with minimal investment.
In the network. Flash is typically deployed as a caching layer between the host and storage layers. Network caching is used for applications when there is inadequate flash capacity on the server to achieve the required high cache hit rates.
Within the storage array, there are two options:
Hybrid array. Hybrid arrays combine HDDs with SSDs in conjunction with intelligent data placement software or policies. In these solutions, SSDs can be leveraged either as persistent storage (written to the drive and able to survive a power cycle) through automated tiering technology or as a cache layer within the array. In either case, a relatively small amount of NAND flash (up to 15%) is used to accelerate the system’s performance beyond traditional HDD-only solutions.
All-flash array. All-flash arrays are purpose-built enterprise-grade storage devices using only flash-based SSDs as the storage media. These arrays contain no traditional HDDs; instead, they leverage persistent flash storage in I/O-intensive environments such as OLTP databases.
Advances in semiconductor technology and the growing use of NAND flash in the consumer market have pushed NAND flash-based SSDs into the enterprise as a cost-effective solution to address the server/storage performance gap by complementing HDD-based storage infrastructures. When coupled with software functions such as intelligent caching and automated storage tiering, these techniques have made SSD deployment easier and solid-state storage more usable across the enterprise.
SSDs comprise a semiconductor nonvolatile memory (typically NAND flash), an advanced device controller, and an interface to connect to the host. These devices are transforming the entire computing industry as a result of inherent benefits, such as:
Cost savings. Enterprise NAND flash is a more expensive storage medium compared with HDDs on a $/GB basis. Yet, when solid-state storage is integrated into a system with storage optimization technologies such as compression and deduplication, storage vendors can lower the acquisition cost and total cost of ownership. Also, $/IO/GB is optimized with the use of solid-state storage. To achieve comparable levels of I/O performance, traditional HDD arrays must leverage large numbers of drive spindles, with the associated cost, power, floor space, and reliability issues.
High performance. SSDs can achieve multiple GB per second of random data throughput and offer high I/O operations per second (IOPS) performance. For example, a single SSD can provide in excess of 30,000 IOPS, an order of magnitude improvement over the fastest HDDs. In addition, SSDs provide a more consistent I/O response time because of the predictable access time and high bandwidth.
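The spindle-count point can be made concrete with a rough comparison; the ~180 random IOPS per 15K RPM HDD figure below is an illustrative assumption, not a figure from the text:

```python
import math

# How many fast HDDs would be needed to match one SSD on random IOPS?
ssd_iops = 30_000   # single enterprise SSD, per the figure above
hdd_iops = 180      # assumed typical 15K RPM HDD random IOPS

spindles = math.ceil(ssd_iops / hdd_iops)
print(f"HDD spindles needed to match one SSD: {spindles}")  # 167
```

Under these assumptions, well over a hundred spindles (plus their power, floor space, and failure exposure) are needed to deliver the random I/O of a single drive-form-factor SSD.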
Greater efficiency. By leveraging SSDs in an intelligent manner, storage vendors aim to make their storage solutions more efficient. For example, by placing the most frequently accessed data on high-performance SSDs and less frequently accessed data, or cooler data, on the most cost-effective HDDs, storage vendors can increase efficiency. SSD-based solutions typically offer lower power consumption, cooling, and floor space requirements than an HDD-only alternative.
HDD’s Future Role: Storing the Cold Data
Over the past few years, the need for lower-cost storage has come from consumer-generated unstructured content, cloud services and data depots, and a mobile and social world that demands and creates data at the touch of a screen. Much of the data may never be read after it is stored, but the desire is to have it available in case it is requested.
IDC believes that HDDs will be the technology platform to provide the industry with this lower tier of “cold” storage. Flash can be seen as the enabler of the lower tier because it is expected that a properly sized layer of flash storage can handle the majority of IOPS in a given commercial workload. If flash can provide that capability, then IT managers will seek to correctly size their flash layer, with estimates ranging from 2% to 15% of a datacenter’s total storage infrastructure, and then push as much data as possible to the least costly layer. Cost, capacity, and application performance requirements will ultimately determine what proportion of the storage will be cold and what proportion still needs to be stored on performance-optimized HDDs.
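The trade-off behind flash-layer sizing can be sketched as a blended $/GB model; the per-gigabyte prices here are placeholder assumptions for illustration, not figures from the text:

```python
# Blended $/GB for a two-tier store: a small flash layer over cold HDD capacity.
def blended_cost_per_gb(flash_fraction, flash_usd_gb=1.00, hdd_usd_gb=0.05):
    """Capacity-weighted $/GB given the fraction of capacity on flash."""
    return flash_fraction * flash_usd_gb + (1 - flash_fraction) * hdd_usd_gb

# The 2%-15% sizing range cited above:
for frac in (0.02, 0.15):
    print(f"{frac:.0%} flash tier -> ${blended_cost_per_gb(frac):.3f}/GB")
```

Even at the top of the sizing range, the blended cost stays far closer to the HDD price than the flash price, which is why correctly sizing the hot layer and pushing everything else to cold storage is so attractive.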
The key success factor will be whether or not a cold storage platform can meet the cost requirements without slowing the operations of the datacenter and corresponding applications through slower-than-typical performance, where latency may be measured in seconds.
Enterprise and Consumer-Grade Storage
Storage infrastructure for enterprise users must deliver sustained performance levels far in excess of what is required for single-user consumer applications: typically 100% utilization, 24 hours a day, seven days a week. A storage hardware failure can disrupt many of the IT users and customers that depend on the reliable provision of IT services. Consumer-grade storage is optimized for low cost and a typical workload utilization of 10%–20% for 40 hours a week. A storage hardware failure at client level will be annoying but will generally affect only the individual user.
Enterprise- and Consumer-Grade HDDs
To address the differing priorities of these use cases, HDD and SSD vendors develop solutions that are specifically tailored for each environment. Enterprise-class HDDs commonly use the following methods to achieve the required reliability and performance.
Heavier grade mechanical components. The enterprise system will not only support operating system and application tasks locally but will also support client requests 100% of the time. During off-peak times, the enterprise system may be scanning the hard drives for defects or errors, performing system backup, and carrying out other maintenance tasks. Enterprise workloads create greater wear on bearings, motors, actuators, and platter media, which generates additional heat and vibration.
Higher performance. Enterprise-class HDDs generally have mechanisms that allow faster data access and transfer. These features include faster spindle speeds, more powerful actuator magnets, denser magnetic media, and faster processors with more cache memory.
Recovery from read errors. In the case of a read error, consumer-grade drives will typically attempt multiple retries before returning an error that the block was unreadable. During this time the drive may become unavailable to the operating system and application. Long drive recovery timeouts are not acceptable in an enterprise environment because multiple users can be affected and because RAID systems do not tolerate an unresponsive drive. Therefore enterprise-class hard drives have a short command timeout value. When a drive has a problem reading a sector and the short timeout is exceeded, the drive will respond by attempting to recover the missing data from the sector checksum if possible. If that attempt fails, the drive will notify the controller, and the controller will attempt to recover using redundant data on other disks in the RAID group and remap the bad sectors. The shorter timeout allows the recovery effort to take place while the system drives continue to support disk access requests from the operating system.
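The recovery flow described above can be sketched as control logic; the function names and retry budget are illustrative, not a real drive firmware interface:

```python
# Error-recovery flow: bounded in-drive retries, then checksum recovery,
# then escalation to the RAID controller. Enterprise drives keep the
# retry budget small so the array stays responsive.
def read_sector(attempt_read, recover_from_checksum, raid_rebuild, retries):
    for _ in range(retries):            # short command timeout window
        data = attempt_read()
        if data is not None:
            return data
    data = recover_from_checksum()      # in-drive recovery from sector checksum
    if data is not None:
        return data
    return raid_rebuild()               # controller rebuilds from redundant disks

# A sector that cannot be read or recovered in-drive falls through to RAID:
result = read_sector(lambda: None, lambda: None, lambda: b"rebuilt", retries=2)
print(result)
```

The consumer-grade behavior corresponds to a very large retry budget with no escalation path, which is exactly what makes the drive appear unresponsive to the host while it retries.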
Resilience to vibration. Vibration from fans and other nearby hard drives can be transmitted to a drive through the system chassis, causing read/write errors if the head is pushed off-track. Enterprise-class drives use more sophisticated compensation for vibration by sensing the vibration motion of the drive, and by sensing head position and track alignment. The drive can then react with additional actuator strength or wait for the spindle motor to bring the target media location under the head again so that it can reattempt access. Enterprise-class drive designs include a closed-loop feedback system between the magnetic head and the spindle(s) to sense vibration anomalies and react accordingly.
End-to-end data integrity. In an enterprise-class drive, transmitted data is always accompanied by parity or checksum information. This allows data transmission errors to be detected, and in some cases corrected or retransmitted. In contrast, consumer-grade drives do not usually incorporate error correction code (ECC) in system memory or drive memory buffers. Enterprise-class systems use error detection at every stage within the system, including ECC support in system memory and drive memory, to increase data integrity.
Variable sector size. Consumer-grade drives use a fixed 512-byte sector with parity data sufficient for the controller to detect data errors in the sector but not enough to rebuild missing or corrupted data. Enterprise-class drives have variable sector sizes that allow the controller to set the data size per sector and use the remaining space for a checksum that allows corrupted data to be recovered. The controller can detect the error and remap the data using available spare sectors.
Drive reliability is commonly quantified as the number of hours mean time between failures (MTBF). Through the use of the techniques above, the reliability of enterprise-class HDDs is of the order of 1.2 million hours based on a duty cycle of 100% for 24x7 operation at 45°C. In contrast, consumer-grade drives are typically rated at 700K hours MTBF based on a 20% duty cycle and 5 × 8-hour operation at 25°C.
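These MTBF figures translate into an approximate annualized failure rate (AFR) under each drive's rated operating schedule. The conversion below is the standard small-rate approximation; treating the rated schedule as powered-on hours is an assumption for illustration:

```python
# AFR ~= powered-on hours per year / MTBF (valid while the rate is small).
def afr(mtbf_hours, powered_hours_per_year):
    return powered_hours_per_year / mtbf_hours

enterprise = afr(1_200_000, 24 * 365)   # 24x7 operation, 100% duty
consumer = afr(700_000, 8 * 5 * 52)     # 5 x 8-hour operation per week

print(f"Enterprise AFR: {enterprise:.2%}")  # 0.73%
print(f"Consumer AFR: {consumer:.2%}")      # 0.30%
```

Note that the consumer drive's lower AFR here reflects its much lighter rated schedule; run 24x7 well outside its design envelope, its real-world failure rate would be considerably worse.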
Toshiba’s Storage Strategy
In IDC’s view, Toshiba’s ability to design and manufacture drives of the highest resilience and reliability has been one of the company’s biggest contributions to storage technology. Toshiba has become the most successful supplier of 2.5in. drives for demanding mobile environments such as laptops, automotive, and industrial and enterprise systems.
Toshiba invented NAND flash memory and has significant investments in end-to-end fabrication facilities, including a $4 billion joint venture with SanDisk for a new 16–17nm facility at Yokkaichi in Japan.
Toshiba’s research and manufacturing resources mean that it is the only company in the world that is able to offer a complete portfolio of HDD and NAND flash solutions for enterprise and consumer use cases. Toshiba is the major investor and key supplier to Violin Memory, which has pioneered enterprise-class flash storage solutions globally since 2010.
Toshiba is committed to both HDD and flash storage, emphasizing their complementary nature. Martin Larsson, Vice President, Storage Products Division, Toshiba Electronics Europe, comments: “HDD and flash memory technologies will continue to coexist, taking advantage of their complementary characteristics. Toshiba is the only vendor that covers the spectrum of HDD, SSD, and NAND flash memory. Inspired by our vision of Total Storage Innovation, we aim to be the leading storage solution provider in the evolving cloud and Big Data era.”
Toshiba will offer HDDs for storage of large volumes of “cold” data. At the same time, it will strengthen its offerings of enterprise SSD by exploiting its proprietary NAND flash memory technology and know-how in controller and firmware design in enterprise HDDs. Future storage products will provide additional reliability and security (encryption) functions.
CHALLENGES/OPPORTUNITIES FOR TOSHIBA
Toshiba has unique capabilities in the HDD and SSD market, but challenges can be identified:
Flashy marketing is generally seen as a lower priority. Yet in today’s confused market, the vendors that shout loudest can win attention from buyers. New market entrants, often backed by significant venture capital, are in a hurry to win deals and build awareness. The clear risk for Toshiba is that it will be out-marketed by smaller and louder competitors that seek to negate or neutralize its commercial and technical edge.
Enterprise-class storage arrays that leverage consumer-grade NAND flash are being offered by Toshiba/Violin Memory’s competitors. The company faces a challenge in promoting the benefits of enterprise-grade flash when some customers may consider a “good enough” solution.
The current trend toward vendors acting more as IT architectural advisors to their enterprise clients presents both an opportunity and a challenge for any vendor. It affords a vendor the opportunity to showcase its broad skill set and can act as a catalyst for a better customer experience. Working collaboratively can be a challenging, but rewarding, path for both the vendor and the end user.
The storage market today is arguably changing more rapidly than at any other time in its history. New market entrants with innovative architectures are working to subvert the incumbent vendors, which in turn are accelerating their development cycles to keep up with new advances. Software-defined storage is moving closer to mainstream adoption, with fundamental implications for the vendors and users alike.
Memory-based storage systems and SSDs have been deployed in enterprises for many years in environments that demanded the best performance regardless of the cost. However, IDC believes that because of the declining cost of NAND flash media and system-level advancements, solid-state technology will become pervasive across the enterprise and complement existing storage systems. Today’s business and technology leaders should look to solution providers that offer a comprehensive portfolio to meet diverse enterprise workloads and those that have a strategic framework to help customers determine the optimal flash solution for their specific needs.