This is a continuation of Memory in the Data Center - Part I.
We all want higher-throughput, lower-latency data centers.
That translates to adding more memory. The question is how best to do that for maximum application performance at the least overall infrastructure cost.
Flash or DRAM drives? One approach is to take the new memory-based solutions and make them look like disks. That is, flash or DRAM emulates the characteristics of a disk drive, using some flavor of SCSI, FC, or SATA disk-level operations to communicate with the rest of the system.
Examples of this implementation include both flash- and DRAM-based solid-state drives, solid-state arrays, PCI-based memory repositories, and static memory appliances.
This approach can benefit very specific data sets that are not likely to grow, because the memory is presented to the system as a fixed LUN and resizing is difficult. It is tricky to resize a group of SSDs in an array when there are no drive slots left, or to add another PCI card to a full bus.
The memory-as-disk approach is also best suited to small data sets, because whatever you place on it forgoes the benefits of low-cost, high-capacity storage. Granted, you can manually move data around, but that is generally regarded as troublesome, time-consuming maintenance effort that could be better spent on new initiatives.
One further complication with the memory-as-disk approach is the difficulty of extracting the active data set from the entire file or volume. This can be nearly impossible to assess manually, and it therefore results in over-provisioning memory resources to accommodate the entire data file or LUN rather than the much smaller percentage of active, or “hot,” data.
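To make that over-provisioning point concrete, here is a minimal sketch. The trace and the 90% coverage threshold are hypothetical, and it assumes you can pull per-block access counts from some monitoring or tracing tool:

```python
from collections import Counter

def hot_fraction(block_accesses, coverage=0.90):
    """Estimate the fraction of distinct blocks that account for
    `coverage` (e.g. 90%) of all accesses in a trace."""
    counts = Counter(block_accesses)              # accesses per block
    total = sum(counts.values())
    running = hot_blocks = 0
    for _, n in counts.most_common():             # hottest blocks first
        running += n
        hot_blocks += 1
        if running >= coverage * total:
            break
    return hot_blocks / len(counts)

# Hypothetical trace in which one block dominates the I/O.
trace = [0] * 40 + [1] * 5 + [2, 3, 4, 5, 6]
print(f"{hot_fraction(trace):.0%} of blocks receive 90% of accesses")
```

In this toy trace, roughly a third of the blocks soak up 90% of the I/O, yet a memory-as-disk deployment still has to buy enough memory to hold all of them.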
Finally, memory-as-disk assumes that the memory has disk-level retention characteristics. In the case of DRAM, that requires a robust and highly available battery backup system, along with all of the storage management responsibilities of persistence. In the case of flash, concerns remain about wear levels and reliability; these will improve and may well be resolved over time, but they can still give enterprise IT departments pause in the near term.
An interesting alternative to deploying memory as a disk is to use memory as a cache. The concept itself is not new: caches have been deployed at nearly every level of data center systems, from L1 and L2 CPU caches, to motherboard-attached memory, to storage subsystem caches, and even caches at the drive level.
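The mechanism is the same at every one of those levels: keep recently used data in fast memory and evict the rest. A minimal least-recently-used (LRU) sketch, purely for illustration, looks like this:

```python
from collections import OrderedDict

class LRUCache:
    """Toy least-recently-used cache; real caches at each of the
    levels above use far more sophisticated variations on this idea."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None                       # cache miss
        self.items.move_to_end(key)           # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)    # evict least recently used
```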
CPU-level and drive-level caching should and will continue for a long time to come. What I’d like to focus on now is the difference between client-side caching (at the application server), storage-side caching (at the subsystem), and caching in between, in the network.
The end-node server and subsystem solutions have worked well in the past, particularly in a world where single servers connected to single storage systems; in that case, having memory on one side of the connection or the other was guaranteed to help. But in an increasingly networked world, where many application servers connect to multiple storage systems, including deployments of clustered file systems, placing the memory and caching resources in the network makes a lot more sense.
No doubt this line of thinking will stir some debate, and I agree that there will always be room for caching at the end-node servers and storage systems. But the reality is that once you are in a multi-device world, the most efficient and effective use of a memory-based resource is to apply it across the maximum number of servers and storage systems. This is analogous to the migration from direct-attached to networked storage many years ago.
With a memory-based caching resource in the network, any application server requesting data from any storage system can benefit from having frequently accessed data served from high-speed memory rather than from slower mechanical disk. This guarantees that as hot spots shift between data sets or systems, the benefit of caching applies across the board.
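As a rough sketch of that behavior (the class and its read_block interface are hypothetical, not any particular product’s API), a read-through cache sitting in the data path serves hits from memory and only goes to the storage system on a miss:

```python
class ReadThroughCache:
    """Hypothetical network-resident cache shared by many application servers."""
    def __init__(self, backend, cache):
        self.backend = backend   # slower, disk-based storage system
        self.cache = cache       # fast memory cache, e.g. the LRUCache sketched earlier

    def read_block(self, lun, block):
        key = (lun, block)
        data = self.cache.get(key)
        if data is not None:
            return data                                # hit: served from memory
        data = self.backend.read_block(lun, block)     # miss: fall back to disk
        self.cache.put(key, data)                      # now hot for every attached server
        return data
```

Because the cache sits in the network rather than inside any one server, a second server reading the same block gets a memory hit even though a different server warmed the cache.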
The alternative would be to put the maximum amount of cache memory into each server and storage system, which falsely assumes that every system sees exactly the same utilization all of the time. That is simply not an option for most customers.
Another major reason to cache in the network is scalability. Caching at end-node devices means that each increment in cache requires another end-node device. Need more cache in your subsystem? Buy another subsystem. If the cache is in the network, that subsystem might last a lot longer, and in some cases customers might actually reduce the overall number of storage systems required to deliver a given performance level.
Over and over, we’ve seen valuable system-level technologies move into the network. Servers have long been there, storage more recently, and caching with high-speed, high-performance memory is another step in that direction.
Next up: Part III - Installation and Operation of Memory in the Data Center