There has been a lot of activity recently around tiered storage. But I always ask the same question: "What storage administrator do you know who voluntarily wants to break up their data set across multiple tiers, with each tier requiring its own set of policies, backup and recovery procedures, and guidelines for when data should move up or down?" This isn't about reducing work; it is about increasing it.
Tiered storage is primarily about saving money. Administrators do it because they have to, not because they want to.
If money were no object, people would keep buying fast drives. If time were no object, people would buy high-capacity slow drives because they could reduce the overall number of spindles (something we wrote about earlier in Drive Reduction, Driving Down Drives, The End of An Antidote, and More on Drive Reduction with Compression).
Ultimately, storage professionals look for the right balance of performance, capacity, and cost, which to date has meant adding work to the equation as they are forced to migrate data back and forth between tiers. Sometimes this migration can be semi-automated, but other times it requires manual intervention, as we discussed in Please Don't Move My Data.
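The performance/capacity trade-off behind tiering can be sketched with some back-of-the-envelope spindle math. All of the drive specs and workload numbers below are illustrative round-number assumptions, not vendor figures:

```python
import math

def spindles_needed(capacity_gb, iops, drive_capacity_gb, drive_iops):
    """Spindle count is set by whichever requirement -- capacity or
    performance -- demands more drives."""
    by_capacity = math.ceil(capacity_gb / drive_capacity_gb)
    by_performance = math.ceil(iops / drive_iops)
    return max(by_capacity, by_performance)

# A hypothetical performance-heavy workload: 30 TB, 20,000 IOPS.
# Assumed drives: 300 GB / 180 IOPS (fast), 2 TB / 75 IOPS (big, slow).
fast = spindles_needed(30_000, 20_000, 300, 180)   # 112 spindles
slow = spindles_needed(30_000, 20_000, 2_000, 75)  # 267 spindles

# A hypothetical capacity-heavy workload: 100 TB, 3,000 IOPS.
fast2 = spindles_needed(100_000, 3_000, 300, 180)  # 334 spindles
slow2 = spindles_needed(100_000, 3_000, 2_000, 75) # 50 spindles
```

Neither drive type wins both workloads, which is exactly why administrators end up splitting data across tiers and inheriting the migration work that comes with it.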
But things are starting to change. If performance can be achieved by adding a scalable caching appliance to the mix, then the need to split data between high-performance, high-cost drives and high-capacity, low-cost drives diminishes. The option now is to streamline the overall architecture down to a single tier of high-capacity, low-cost storage and add performance through caching.
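As a rough conceptual sketch (not a description of any particular appliance), a caching tier in front of a capacity tier behaves like an LRU cache over a slow backing store: hot blocks are served from fast media while everything lives permanently on the cheap tier, so no data migration policy is needed.

```python
from collections import OrderedDict

class CacheTier:
    """Minimal read cache with LRU eviction in front of a slow backing
    store. Real caching appliances are far more sophisticated; this
    only illustrates the principle."""

    def __init__(self, backing_store, capacity):
        self.backing = backing_store   # the high-capacity, low-cost tier
        self.capacity = capacity       # how many blocks fit on fast media
        self.cache = OrderedDict()

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)   # hot data stays cached
            return self.cache[block]
        data = self.backing[block]          # slow path: capacity tier
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least-recently-used
        return data

# Usage: the authoritative copy never moves; only cached copies churn.
store = {i: f"block{i}" for i in range(10)}
tier = CacheTier(store, capacity=2)
tier.read(1), tier.read(2), tier.read(3)    # 1 is evicted
```

The design point worth noting is that eviction replaces migration: dropping a cold block from the cache is free, because the capacity tier already holds it.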
We wrote an article that touched on this recently called Building an Accelerated Archive.
Many years ago, Information Lifecycle Management was the hot buzzword. No longer. Now tiered storage is receiving its share of attention, but we must find ways to remove steps from daily operations, not add them.