Overall Data Center Impact
The final area to explore is the overall impact these new memory-based approaches will have on the data center. In particular, we want to keep in mind the major themes of spindle reduction, consolidation, and the need to reduce power, space, and cooling.
Memory in all of its various shapes and forms will be responsible for keeping our data centers from spiraling out of control with disk drive proliferation. Coupling high-speed memory with low-cost, high-capacity drives delivers both the performance and capacity requirements for modern data centers in an efficient footprint.
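To make the spindle math concrete, here is a rough back-of-the-envelope sketch. The drive figures (roughly 180 IOPS for a 15K RPM performance drive, 4 TB for a capacity drive) and the workload targets are illustrative assumptions, not measurements from any particular product.

```python
# Back-of-the-envelope spindle math (illustrative assumptions only).
# A disk-only array must be sized for whichever need is larger: IOPS or capacity.
# With a memory cache absorbing the hot working set, the disk tier can be
# sized for capacity alone.

target_iops = 40_000        # assumed application requirement
target_capacity_tb = 100    # assumed application requirement

perf_drive_iops = 180       # rough figure for a 15K RPM performance drive
perf_drive_tb = 0.6         # typical performance-drive capacity
cap_drive_tb = 4.0          # typical capacity-drive size

# Performance-optimized, disk-only: driven by IOPS in this example.
perf_spindles = max(target_iops / perf_drive_iops,
                    target_capacity_tb / perf_drive_tb)

# Capacity-optimized disk behind a memory cache that serves the hot data:
# the disk tier only has to hold the bytes.
cached_spindles = target_capacity_tb / cap_drive_tb

print(f"disk-only, performance-optimized spindles: {perf_spindles:.0f}")    # ~222
print(f"capacity-optimized spindles behind a cache: {cached_spindles:.0f}") # 25
```

The exact numbers matter less than the shape of the result: once memory carries the IOPS burden, the spindle count is dictated by capacity rather than performance.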
But as with all data center transformations, this will not happen overnight. And there will be stages of deployment as data center managers find the best way to enhance their current infrastructure with new memory-based solutions.
The most visible aspect of the data center transformation will be the spread of silicon (in terms of processing cores and memory) to complement the storage layer. Of the three primary layers in the data center – servers, networks, and storage – the storage component is the last to rely on physical moving parts. Some deride this as “rotating rust,” but I prefer to think of it as the most cost-effective means of retaining high-capacity persistent data. But today’s robust application servers demand more in terms of IOPS and bandwidth than a typical disk-based storage system can provide.
Disks are not going away, and neither is tape. But we will see memory advance to complement, and eventually displace, some disk-based systems that have relied on spindles for performance.
The big wins in adding memory to the data center will not come from trying to replace disk, but from effectively enhancing it. The decline in memory prices is often taken as proof that disks will go away sooner, but I see it differently. Falling memory prices make the use of both technologies more compelling: performance when needed through memory, plus a high-capacity, low-cost persistent storage layer for a never-ending amount of content.
Over time (and I use these words carefully because it may be five years, or it might be ten) memory as a persistent storage medium will become an effective option in its own right. But this will require more reconfiguration of data centers than might be fully realized. Storage systems that rely on memory for persistence need not only the basic components in place, but also years of maturity before all of the exceptions and error conditions can be handled in a standards-based manner understood by multiple vendors. That day is a bit further off than the current headlines might indicate.
In the meantime, we have new ways to make use of memory that did not exist a few years ago, giving us the ability to:
- complement and enhance our existing storage systems
- right-size memory-based caching solutions in terms of IOPS, bandwidth, low latency, and capacity (a rough sizing sketch follows this list)
- deploy memory as a network resource that is addressable by any application server accessing any storage system
- retain existing applications without modification
- reduce the amount of active administration and eliminate the need for manual data movement to optimize performance
- intelligently improve our ability to rapidly access large file repositories
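As an illustration of what right-sizing a caching tier might look like, the sketch below estimates how many cache nodes a workload needs by taking the most constrained of the IOPS, bandwidth, and hot-data-capacity dimensions. Every per-node and per-workload figure here is a hypothetical placeholder, not a specification of any real appliance.

```python
from dataclasses import dataclass
from math import ceil

@dataclass
class CacheNode:
    # Hypothetical per-node limits for a memory-based caching appliance.
    iops: int
    bandwidth_gbps: float
    capacity_gb: int

@dataclass
class Workload:
    peak_iops: int
    peak_bandwidth_gbps: float
    hot_working_set_gb: int

def nodes_needed(w: Workload, node: CacheNode) -> int:
    """Size the cache tier by whichever dimension is most constrained."""
    return max(ceil(w.peak_iops / node.iops),
               ceil(w.peak_bandwidth_gbps / node.bandwidth_gbps),
               ceil(w.hot_working_set_gb / node.capacity_gb))

# Assumed example: a node rated at 200K IOPS, 2 GB/s, and 512 GB of cache.
node = CacheNode(iops=200_000, bandwidth_gbps=2.0, capacity_gb=512)
workload = Workload(peak_iops=450_000, peak_bandwidth_gbps=5.0, hot_working_set_gb=1_500)
print(nodes_needed(workload, node))  # -> 3 for these assumptions
```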
Individually, these capabilities might seem straightforward. Taken together, they expand and open up the architectural options for data center managers. But most importantly, they represent an immediate opportunity for IT professionals to dramatically improve current data center performance without causing unnecessary and premature overhauls of existing data center equipment.
Freed from having to buy disks for performance, data center managers will be able to change the way they purchase and configure storage. The impact will reshape the map of the modern data center.
The world is moving away from configuring for performance with disk. As the following chart shows, spending on performance-optimized storage is declining because customers are tired of paying twice as much per gigabyte. And there is an overwhelming trend toward moving more workloads to capacity-optimized storage.
Why isn't this happening faster?
- Performance-oriented customers are still afraid that they will not be able to meet their performance needs on capacity-optimized storage. But with the introduction of more memory in the data center in new and unique ways, such as scalable caching appliances, that changes completely. Now customers have a performance insurance policy that lets them move more data and more applications to low-cost, high-capacity storage. The result: performance-optimized storage spending declines faster than anticipated.
- Customers looking to extend their capacity-optimized storage to encompass more enterprise applications often get pushback about performance. But now they can achieve performance equal to or greater than traditional disk-oriented configurations. Watch out: capacity-optimized storage systems are about to get significantly more interesting. The end result: faster adoption of single-tier, capacity-optimized systems complemented by sophisticated clustered caching solutions that dynamically improve performance, as the read-through caching sketch below illustrates.
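To show why existing applications don’t have to change for the “performance insurance policy” to work, here is a minimal read-through cache sketch. The backend object and the simple LRU policy are stand-ins for whatever a real clustered caching solution would provide; this is an illustration of the pattern, not an implementation of any specific product.

```python
from collections import OrderedDict

class ReadThroughCache:
    """Minimal LRU read-through cache in front of a capacity-optimized backend.

    The application keeps calling read(key) exactly as it did against the
    backend alone; hot data is served from memory, and misses fall through
    to the high-capacity disk tier.
    """

    def __init__(self, backend, max_items=1024):
        self.backend = backend        # any object with a read(key) method
        self.max_items = max_items
        self._cache = OrderedDict()   # key -> data, in least-recently-used order

    def read(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)      # cache hit: mark as recently used
            return self._cache[key]
        data = self.backend.read(key)         # cache miss: go to the disk tier
        self._cache[key] = data
        if len(self._cache) > self.max_items:
            self._cache.popitem(last=False)   # evict the least recently used item
        return data
```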