VMware recently published a paper benchmarking storage protocol performance, comparing:
- Fibre Channel
- Hardware iSCSI (with iSCSI and TCP/IP offload engine)
- Software iSCSI
- NFS
According to the description, the focus was on cached runs to showcase protocol performance as opposed to system performance. They also chose 100% sequential workloads, stating that randomness has almost no effect on throughput or response time in cached runs.
The one tricky part of the test is that Fibre Channel was measured over a 2Gb/s connection, while all of the IP-based protocols ran over 1Gb/s links. That said, results reached roughly wire speed in most scenarios, which validates one longstanding networking maxim: when networks are point-to-point, full-duplex, and switched, it generally doesn't matter which protocol you run. What matters more is whether you like the overall architecture, whether the solution is easy to maintain and operate, and whether it delivers favorable cost.
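For a rough sense of what "wire speed" means for these links, here is a back-of-the-envelope sketch. The overhead fraction is my own ballpark assumption, not a figure from the VMware paper:

```python
# Rough conversion from nominal link rate to approximate usable throughput.
# The overhead fraction is an assumed loss to framing, headers, and
# link-level encoding; real numbers vary by protocol and configuration.

def approx_throughput_mb_s(link_gbps, overhead_fraction=0.07):
    """Convert a nominal link rate (Gb/s) to approximate usable MB/s."""
    raw_mb_s = link_gbps * 1000 / 8            # bits -> bytes (decimal MB)
    return raw_mb_s * (1 - overhead_fraction)

# 1 Gb/s Ethernet (iSCSI/NFS) vs. 2 Gb/s Fibre Channel in this test setup
print(f"1 Gb/s link: ~{approx_throughput_mb_s(1):.0f} MB/s")
print(f"2 Gb/s link: ~{approx_throughput_mb_s(2):.0f} MB/s")
```

In other words, "about wire speed" on a 1Gb/s Ethernet link works out to roughly 110-120 MB/s of usable throughput, which is the ceiling the IP-based protocols were measured against.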
The takeaway from VMware (noting that FC tests were at 2Gb/s and the others at 1Gb/s):
...although Fibre Channel has the best performance in throughput, latency, and CPU efficiency among the four options, iSCSI and NFS are also quite capable and may offer a better cost‐to‐performance ratio in certain deployment scenarios.
I also came across an interesting blog entry related to this paper's release.
"On some in-house application-specific benchmarking that I've done, I actually saw overall better performance with an NFS datastore than with a software iSCSI datastore on the same filer." (from the Thinking Sysadmin blog)
I hope virtualization fans will take these results for what they are worth: all storage protocols are valid in certain cases, but people should choose architectures, not protocols. For network-attached storage fans, there should no longer be any hesitation about virtualization deployments. We covered this earlier in a post, NFS on the Rise for Virtualization. In particular, for NAS fans looking for guaranteed performance, pairing a scalable caching appliance with a virtualization environment can make a lot of sense. We covered this in Overcoming Virtualization I/O Pitfalls.