Tony Asaro penned a short snippet about caching and Gear6 in his Computerworld blog. The best part of the post is his explanation that caching is a multi-level process. According to Tony, more speed at every level helps, much like a relay race, where one fast runner handing off to another fast runner leads to a win.
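To make the relay analogy concrete, here is a minimal sketch of that multi-level idea: a small, fast L1 tier in front of a larger L2 tier, with evicted entries "handed off" down a level. All names and sizes are illustrative assumptions, not Gear6's actual design.

```python
from collections import OrderedDict

class TwoLevelCache:
    """Illustrative two-level cache: small fast L1 backed by larger L2."""

    def __init__(self, l1_size=2, l2_size=4):
        self.l1 = OrderedDict()  # small, "fast" tier
        self.l2 = OrderedDict()  # larger, "slower" tier
        self.l1_size, self.l2_size = l1_size, l2_size

    def get(self, key):
        if key in self.l1:           # L1 hit: fastest path
            self.l1.move_to_end(key)
            return self.l1[key]
        if key in self.l2:           # L2 hit: promote back up to L1
            value = self.l2.pop(key)
            self.put(key, value)
            return value
        return None                  # miss: caller fetches from storage

    def put(self, key, value):
        self.l1[key] = value
        self.l1.move_to_end(key)
        if len(self.l1) > self.l1_size:
            # evict the oldest L1 entry down to L2 -- the relay handoff
            old_key, old_val = self.l1.popitem(last=False)
            self.l2[old_key] = old_val
            if len(self.l2) > self.l2_size:
                self.l2.popitem(last=False)

cache = TwoLevelCache()
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)   # "a" is demoted from L1 to L2
```

Each tier absorbs what the one above it cannot hold, so a hit at any level still beats going back to disk.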
Caching has been used throughout data centers for a long time, and it is likely to remain a key part of end-to-end architectures. Don't expect cache to completely disappear from individual disks, subsystems, or servers anytime soon. But when large amounts of cache are required, particularly for applications that are otherwise severely I/O constrained, it makes sense to centralize that caching resource into a coherent pool that can be shared across any number of servers and any number of storage devices. We've seen this shift before...remember when disks used to sit directly within, or directly attached to, individual application servers?