We've seen the emergence of the term "Tier Zero." It typically refers to the highest-performing tier of storage within a storage array. But with this top tier comes the responsibility to manage data up and down the tiers. That means defining policies for each tier, determining which data is placed on each tier, protecting each tier, and managing capacity for each tier, including tier zero.
Managing capacity within a memory-centric tier zero can be tricky. Because tiers in a block-based storage array are represented as LUNs, the systems are not particularly dynamic when it comes to expanding or contracting those LUNs. (This is not to say that those capabilities do not exist, but it still isn't easy.) That rigidity, along with the required migration-policy assignments, coupled with the need to snapshot, back up, and protect each tier, adds up to a long list of to-do items for already overburdened administrators.
Centralized caching is a top-performing I/O delivery engine. It can be thought of as a top tier of storage, but it is actually quite different.
Instead of taking memory and trying to make it look like a disk or persistent tier, centralized caching takes memory and makes it look like an intelligent cache. This means the cache is dynamically populated from as much storage as you would like to support. Caching enhances existing storage configurations rather than replacing them.
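The dynamic-population idea can be sketched with a simple least-recently-used (LRU) read cache: blocks enter the cache as applications touch them, and cold blocks are evicted automatically. This is a generic illustration only, not Gear6's actual algorithm; the names (`LRUCache`, `read`, `backing_store`) are invented for the example.

```python
from collections import OrderedDict

class LRUCache:
    """Toy read cache: blocks are populated on access, and the
    least recently used block is evicted when the cache is full."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block_id -> data, oldest first

    def read(self, block_id, backing_store):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # cache hit: mark as hot
            return self.blocks[block_id]
        data = backing_store[block_id]          # cache miss: fetch from disk
        self.blocks[block_id] = data            # populate dynamically
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict the coldest block
        return data
```

No administrator decides up front which blocks belong in memory; the access pattern itself drives placement, which is the key contrast with a statically provisioned tier.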
When tier zero appears as a LUN, you are effectively saying to the application “you can have as much performance as you like, as long as it fits in this LUN.” With centralized caching you are saying, “you can have as much performance as you like across any amount of storage, and if you need more caching resources you can add them on-the-fly.” Quite a different approach, and one Gear6 sees as a much more effective way to deploy memory in the data center.
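The difference between the two capacity models can be made concrete with a toy sketch. All names and numbers here are hypothetical illustrations, not product specifics: a fixed tier zero caps performance at whatever fits in the LUN, while a cache pool simply grows.

```python
# Hypothetical numbers for illustration only.
TIER0_LUN_GB = 500  # a fixed tier-zero LUN

def fits_in_tier0(working_set_gb):
    """Fixed tier: high performance only for data inside the LUN."""
    return working_set_gb <= TIER0_LUN_GB

class CachePool:
    """Centralized cache: any working set is eligible, and capacity
    can be expanded on the fly by adding caching resources."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb

    def add_capacity(self, extra_gb):
        # Grow the cache without repartitioning LUNs or migrating tiers.
        self.capacity_gb += extra_gb
```

In the fixed-tier model, a working set larger than the LUN is simply out of luck; in the cache model, the answer is to add capacity, not to re-architect the tiers.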
A reference deployment might include a scalable caching appliance strategically located in the network to provide rapid client responses and access to a clustered file system. The clustered file system does its job well: providing virtually unlimited, easy-to-manage capacity. The caching appliance does its job well: dynamically applying performance resources to shifting disk and system hotspots based on application requests. This is a winning combination.
So is caching a tier? Perhaps. But more importantly, caching is a more sophisticated, more dynamic approach than creating a fixed tier zero of memory-based storage. Caching frees administrators from the hassle of figuring out what should or should not go in memory, letting applications drive their own needs and allowing cache resources to scale to deliver maximum application performance.