Robin Harris at StorageMojo hosts a short video about Gear6, including an interview with our Chief Technology Officer, Nisha Talagala. The StorageMojo post is here, or you can view the embedded video below. Enjoy.
New Study From Industry Researcher TheInfoPro™ Confirms Increased Demand for Greater I/O Performance for Server Virtualization
Caching Solutions Address Industry-wide Need for Accelerated and Sustainable I/O Performance Within Existing Server and Storage Infrastructure
Mountain View, Calif. – October 15, 2007 – Gear6, which accelerates storage for real-time application performance, today announced key findings from server and storage studies conducted by TheInfoPro, an independent research network for the IT industry, that confirm accelerated and sustainable I/O performance is a top priority for data center managers considering server virtualization and/or storage consolidation initiatives. Based on data from “TheInfoPro’s Wave 9 Storage and Wave 5 Server Studies,” the ability to add sustainable I/O performance without requiring complicated storage tiering or data migrations can save companies significant time and money.
Blade server adoption has been discussed for a number of years now, but the form factor hasn't been embraced the way it was initially expected to be. One of the main reasons is that blades are often referred to as "hot little power-suckers." That can't inspire confidence in data center managers now dealing with the near-ubiquitous concerns of power, space, and cooling.
For most, the tried-and-trusted 1U and 2U rack-mounted servers have been reliable and familiar workhorses, dramatically reducing the urgency of blade adoption.
But power is far from the only sticking point hindering blade server success.
In the configuration puzzle of cramming CPU resources into a blade while keeping them from delivering roasting-level temperatures, other items had to go... most notably disk drives (hey, that's what network storage is all about, right?) and a good chunk of the memory per blade. Unfortunately, you simply can't fit all of your CPU, disk, and memory resources into a much denser form factor without shaving a little bit here and there.
The difficulty is that when you remove disks and memory from the equation, the CPUs spend more time traversing a network to get their data. Removing the disks alone is a big change, since what may have been temporarily stored on local disk is now gone; further reducing the amount of RAM per blade means that applications have to go out to disk more frequently than before.
This double-whammy of reduced memory and I/O capability within a single blade puts extra pressure on the network storage infrastructure within blade-centric data centers and often leads to I/O bottlenecks that can severely impact application performance.
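To put rough numbers on that pressure, here is a minimal sketch of the standard average-access-time model, treating a blade's local RAM as a cache in front of networked storage. The latencies and hit ratios are illustrative assumptions, not measurements of any particular blade or filer.

```python
# Average access time for a blade whose local RAM acts as a cache
# in front of networked storage. All numbers are illustrative
# assumptions, not measured values.

RAM_LATENCY_US = 0.1            # assumed local memory access, ~100 ns
NETWORK_IO_LATENCY_US = 5000.0  # assumed networked disk read, ~5 ms

def avg_access_us(hit_ratio: float) -> float:
    """Weighted average of RAM hits and network-storage misses."""
    return (hit_ratio * RAM_LATENCY_US
            + (1.0 - hit_ratio) * NETWORK_IO_LATENCY_US)

# A memory-rich server might keep most of its working set in RAM;
# a light-memory blade keeps less, so more requests cross the network.
for hit_ratio in (0.99, 0.95, 0.90):
    print(f"hit ratio {hit_ratio:.0%}: avg access "
          f"{avg_access_us(hit_ratio):.1f} us")
```

In this model, dropping the hit ratio from 99% to 90% makes the average access roughly ten times slower, which is the double-whammy in numbers.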
Just as servers have changed their form factors, caches are in the midst of a similar transition. Once hidden within single servers or storage systems, caches can now be deployed as a shared network resource.
Scalable caching appliances such as CACHEfx from Gear6 enable customers deploying blades to externalize caching on the network and serve data instantly to I/O-intensive applications. The end result is better blade utilization and the freedom to adopt blades despite diskless or light-memory configurations. Scalable caching appliances can serve data from any number of storage systems to any number of clients, and they can expand on the fly to meet increasing application workloads.
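For a feel of the pattern, here is a minimal read-through cache sketch in Python. Gear6 has not published CACHEfx internals, so the names and structure below are hypothetical illustrations of the general technique, not the product's actual design.

```python
# Minimal read-through cache sketch: a shared cache sitting between
# many clients and a backing store. Hypothetical illustration only.

class DiskStore:
    """Stand-in for a backing filer; returns synthetic block data."""
    def read(self, path: str, block: int) -> bytes:
        return f"{path}:{block}".encode()

class ReadThroughCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store
        self.cache = {}  # (path, block) -> data

    def read(self, path: str, block: int) -> bytes:
        key = (path, block)
        if key in self.cache:       # hit: served from memory
            return self.cache[key]
        data = self.backing_store.read(path, block)  # miss: fetch once
        self.cache[key] = data      # cached for every later client
        return data

appliance = ReadThroughCache(DiskStore())
appliance.read("/data/model.bin", 0)  # first blade misses; store is hit
appliance.read("/data/model.bin", 0)  # later reads hit the cache
```

Because the cache is shared on the network, a block fetched on behalf of one blade becomes a memory-speed hit for every other blade that asks for it afterwards.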
Most of the blade server vendors are refining and reducing power footprints. That, combined with the availability of instantly accessible I/O, might cause data center managers to bring blades back from the future and closer in line with current planning.
But, as with many technology terms, not all IOPS are defined equally. Some people describe an I/O as getting a piece of data out of memory. Others describe it as getting a piece of data off of a disk drive.
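The gap between those two definitions is enormous. With rough, assumed latencies (DRAM on the order of 100 nanoseconds, a mechanical disk read on the order of 5 milliseconds), a single serial request stream sees:

```python
# IOPS for one serial request stream is simply 1 / latency.
# Both latencies are rough assumptions for illustration.
MEMORY_LATENCY_S = 100e-9  # ~100 ns DRAM access
DISK_LATENCY_S = 5e-3      # ~5 ms mechanical disk read

print(f"memory IOPS: {1 / MEMORY_LATENCY_S:,.0f}")  # 10,000,000
print(f"disk IOPS:   {1 / DISK_LATENCY_S:,.0f}")    # 200
```

Four to five orders of magnitude separate the two, so an IOPS figure means little until you know which operation it counts.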
However, measuring I/O performance from memory or from a disk does not accurately reflect the performance that your end application will see in a network environment. What really matters to the application is how well the I/O serving device can receive and respond to data requests over the network.
As an example, let’s dig deeper into a cached NFS operation in a Gear6 environment. The caching appliance will need to perform the following steps to complete a cached I/O operation.
Step 1: Receive request from the network
Step 2: Process the request
Step 3: Access data in cache
Step 4: Form reply
Step 5: Send reply over the network
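A simple latency budget over these five steps shows why measuring only one of them is misleading. The per-step times below are invented for illustration; real numbers depend on the network, the protocol, and the appliance.

```python
# Latency budget for one cached I/O operation, following the five
# steps above. Per-step times are invented for illustration.
STEP_LATENCIES_US = {
    "receive request from network": 20.0,
    "process the request":           5.0,
    "access data in cache":          1.0,
    "form reply":                    5.0,
    "send reply over network":      20.0,
}

total_us = sum(STEP_LATENCIES_US.values())
cache_us = STEP_LATENCIES_US["access data in cache"]

print(f"end-to-end latency:           {total_us:.0f} us")
print(f"end-to-end IOPS (one stream): {1e6 / total_us:,.0f}")
print(f"'memory IOPS' (Step 3 only):  {1e6 / cache_us:,.0f}")
```

In this model the appliance could advertise a million "memory IOPS" from Step 3 alone while the application sees about twenty thousand, and even an infinitely fast cache access would improve the end-to-end number by only about 2%, because the network steps dominate.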
IOPS definitions that focus on memory access are only measuring Step 3. The speed at which you can get data out of a piece of silicon (for example, memory) does not imply an equal IOPS gain for the end application.
For applications to benefit from I/O improvements, those I/O improvements have to be available and accessible through standards-based protocols (for example, NFS), and standards-based interconnects (for example, Ethernet). Otherwise, the I/O improvements are only available to specialized applications running on the same machine as the accelerator device or component.
The good news is that the industry continues to deliver more IOPS through a variety of solutions. As long as you keep these definitions in mind, you’ll be in the best position to dramatically improve your data center efficiency and application performance.