A quick introduction to understanding, improving, and managing performance
Diagnosing and troubleshooting system performance across hundreds or thousands of application servers and dozens of NFS file servers is a significant challenge. Historically, it has been as much art as science because visibility into I/O traffic across all the endpoints was not easily consolidated. Sure, network traffic analyzers exist, but digging into file I/O activity is a whole different story.
The hardest part is that NAS devices present statistics on a per-device basis, while application servers present statistics on a per-server basis. Evaluating these endpoints individually does not provide an aggregated view. With many applications running in clustered or virtualized compute environments, and with data sets spanning multiple storage systems, it can be very difficult to get a complete picture.
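To make the point concrete, consider what consolidating those per-endpoint numbers involves. The sketch below merges hypothetical per-server NFS operation counters (the kind each host might report via a tool like `nfsstat`) into a single data-center-wide view; the server names and counts are illustrative only, not output from any real system:

```python
from collections import Counter

# Hypothetical per-endpoint NFS operation counters, as each application
# server or filer might report them locally. Names and numbers are
# made up for illustration.
per_server_stats = {
    "app01":  {"read": 120_000, "write": 8_000,  "getattr": 310_000},
    "app02":  {"read": 95_000,  "write": 41_000, "getattr": 150_000},
    "filer1": {"read": 215_000, "write": 49_000, "getattr": 460_000},
}

def aggregate(stats_by_endpoint):
    """Merge per-endpoint counters into one aggregate view."""
    total = Counter()
    for counters in stats_by_endpoint.values():
        total.update(counters)
    return dict(total)

print(aggregate(per_server_stats))
```

Collecting, normalizing, and merging these counters from every endpoint is exactly the work that a centralized vantage point in the network does for you.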
Scalable caching appliances from Gear6 help customers see the forest as well as the trees. By placing a centralized resource in the network that can serve data from any storage system to any application server, the caching appliance is uniquely positioned to deliver aggregate data center statistics. This provides an invaluable resource to administrators trying to understand their overall system performance.
After understanding comes improvement. Here caching appliances enhance existing NAS systems by placing frequently accessed data in memory instead of on disk, enabling much faster response times and, ultimately, faster application performance. Finally, a set of management tools helps customers monitor and adjust performance over time as workloads grow and change in makeup. These steps are outlined in the performance improvement cycle.
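The core mechanism, serving hot data from memory and falling back to storage only on a miss, is the familiar least-recently-used (LRU) caching pattern. The following is a minimal sketch of that pattern in Python under our own simplifying assumptions, not a description of any appliance's actual implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal read cache: hot items are served from memory; misses
    fall through to backing storage, and the least recently used item
    is evicted when the cache is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key, fetch_from_storage):
        if key in self.store:
            self.store.move_to_end(key)      # fast path: hit in memory
            return self.store[key]
        value = fetch_from_storage(key)      # slow path: go to storage
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        return value
```

Repeated reads of the same item hit the in-memory copy and never touch the backing store again until the item is evicted, which is where the response-time win comes from.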
Often we hear the question: why can't I just put more cache memory on my application server or my storage system?
You could. But then you are back to square one in terms of getting a true understanding of the entire system environment. Do you really want to monitor the cache utilization across hundreds of application servers and dozens of storage systems? Or might it be more effective to manage that in one place?
There will always be places to enhance performance across the food chain of applications, servers, caching, networks, and storage systems. But to fully assess and make use of a valuable resource like caching, the network is a smarter place to locate it. Not that you couldn't or shouldn't place caching resources on end-node devices, but rather that you should place a valuable resource where it can scale and be shared across the maximum number of devices.