This week's Network World Tech Update features a piece we wrote about centralized storage caching. It is short and straightforward, so hopefully you will enjoy the read. A key focus is the use of open standards for caching, which in the case of Gear6 scalable caching appliances means Ethernet, IP, and the Network File System (NFS).
I drew on this standards-based caching overview earlier this week at a software conference hosted by The 451 Group. One panel was on software-as-a-service, or SaaS, with a couple of companies delivering SaaS discussing how they approach the market. At the end, an audience member from a large chip company asked what these SaaS companies would like to see from a hardware perspective to help them better deliver their offerings. One panelist responded, "More main memory. That is what we need to deliver our application effectively."
Adding more memory to computers is an age-old quest. The trouble is that doing so requires an intricate balancing act among the vendors of processors, motherboards, memory chips, and operating systems. It isn't easy to keep everything in sync as quickly as market demand for more memory grows.
But while we wait for memory density to increase, there is another straightforward way to deliver more memory to applications: a standards-based approach of centralized storage caching.
Instead of waiting for all of the stars to align in the processor, motherboard, memory chip, and operating system constellation, why not take an open-systems approach and add more memory to the overall infrastructure?
By building a high-speed, high-capacity caching appliance that scales in the network, customers can do just that. The Network World article explains this in more detail.
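Because the appliance speaks standard NFS over Ethernet and IP, clients need no new drivers or software; in principle they simply mount their data through the cache tier instead of directly from the filer. As a rough sketch (the hostnames, export path, and mount options here are illustrative, not taken from the article), an /etc/fstab entry might change like this:

```
# /etc/fstab (illustrative sketch; "filer" and "cache-tier" are
# hypothetical hostnames, and the export path is made up)

# Before: clients mount the export directly from the NFS filer
# filer:/export/appdata      /mnt/appdata  nfs  rw,vers=3,tcp  0 0

# After: the same export, now served through the in-network
# caching appliance sitting in front of the filer
cache-tier:/export/appdata   /mnt/appdata  nfs  rw,vers=3,tcp  0 0
```

The point of the open-standards approach is visible in the sketch: only the mount target changes, while the application, the data path protocol, and the filer itself stay as they are.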
This standards-based approach helped me explain why centralized storage caching is an important step for building scalable infrastructures. For companies delivering software-as-a-service, it might be worth a closer look.