
May 04, 2007

Comments

Storagezilla

Your timing is impeccable. I was just blogging the fact that you said you'd answer the question, and when I went looking for your URL I found you'd just finished answering it :)

Anil Gupta

Awesome! Leveraging the blog to discuss your design philosophy and being interactive with your readers. Looking forward to reading about your product launch and company growth.

kirby

Best to get the facts straight before throwing stones.

"A good example of this is file virtualization engines. Place one of those in your environment and keep your fingers crossed because there is generally no other way out in case of emergency."

A good example of self-serving corporate blog smarm.

The fact is that a well-designed file virtualization engine doesn't modify or change the file in any way. In an emergency (whatever that means) the file is readily accessible, and if the user ever wants to 'de-virtualize' for some reason, it's a straightforward process.

There is no reason to target file virtualization as an alternative to caching, BTW. FAN encompasses performance tiers based on high-speed memory, allowing them to be seamlessly managed along with general-purpose storage, retention, and archival media. We would welcome Gear6 to the SNIA FAN Task Force and look forward to your contributions toward moving FAN forward.

Gary O

Kirby,
Thanks for your comments. Your point is well taken. The intent was not to target file virtualization, but rather to show the difference in design approaches. You are right that a well-designed file virtualization engine doesn't modify or change the file, but by definition would require "de-virtualization" if the file virtualization capability was no longer needed. If I understand correctly, this is quite different from a solution that allows a simultaneous "direct-access" approach.
Gary

gbush_byteandswitch

I'm curious how this works. I know specs weren't released, but if it's transparent, does that mean I just "plug" the device into my existing LAN (keeping it layer 2, since layer 3 adds latency in milliseconds) and voila, my NFS clients get accelerated? Or am I going to have to create mounts / modify stuff in my automount maps to get this going? I can see how you could get some transparency with the assistance of IP (layer 3) technologies like WCCP/PBR on routers to redirect NFS traffic, but I guess you can answer that one, Gary O.
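Just to illustrate what I mean by the non-transparent path, here's a rough sketch of rewriting automount map entries so clients mount through a caching box instead of hitting the filer directly. The hostnames ("filer01", "cachefx01"), paths, and map format are all my own assumptions for illustration, not anything Gear6 has published:

    # Sketch: point auto.home-style automount entries at a hypothetical
    # caching appliance instead of the filer. All names are made up.

    def rewrite_map_line(line, filer, appliance):
        """Rewrite one automount map entry to mount via the appliance."""
        line = line.strip()
        if not line or line.startswith("#"):
            return line                      # keep comments/blank lines as-is
        key, _, rest = line.partition(" ")
        # e.g. "alice  -rw,hard,intr  filer01:/vol/home/alice"
        return "%s %s" % (key, rest.strip().replace(filer + ":", appliance + ":"))

    if __name__ == "__main__":
        sample = [
            "# auto.home",
            "alice   -rw,hard,intr   filer01:/vol/home/alice",
            "bob     -rw,hard,intr   filer01:/vol/home/bob",
        ]
        for entry in sample:
            print(rewrite_map_line(entry, "filer01", "cachefx01"))

That's the kind of client-side change I'd rather avoid, which is why the layer 2 vs. layer 3 redirection question matters.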

Assuming you had to mount through this device, wouldn't it be done via NFS? So, in some ways, it's a clustered/memory-based NFS file server or proxy of some kind?

My last thought is, this device currently supports 256GB of memory (assuming a 32GB per-motherboard capacity), which would be 8 servers in a cluster. Are files placed in a concatenated fashion, or in a stripe-wise fashion across the servers? Since NFS clients mount using one IP, is there a virtual IP they connect to that hides the real IPs the servers are configured with? If not, how is data returned for a file whose data crosses multiple servers (assuming there are multiple servers)? Via private/data LANs?
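To make the placement question concrete, here's a toy sketch of the two layouts I'm asking about. The node count, per-node capacity, block size, and round-robin mapping are my own assumptions, not published specs:

    # Toy model of concatenated vs. stripe-wise placement across cache nodes.
    # 8 nodes x 32GB each = 256GB total; 4KB block size is an arbitrary assumption.

    NODES = 8
    NODE_CAPACITY = 32 * 2**30       # 32 GB per node, assumed
    BLOCK = 4 * 2**10                # 4 KB blocks, assumed

    def concatenated(offset):
        """Fill node 0 completely, then node 1, and so on."""
        return offset // NODE_CAPACITY, offset % NODE_CAPACITY

    def striped(offset):
        """Spread consecutive blocks round-robin across all nodes."""
        block = offset // BLOCK
        node = block % NODES
        local_block = block // NODES
        return node, local_block * BLOCK + offset % BLOCK

    if __name__ == "__main__":
        for off in (0, BLOCK, 2 * BLOCK, 40 * 2**30):   # 40GB offset spills past node 0
            print(off, "concat ->", concatenated(off), "striped ->", striped(off))

Striping is where the virtual IP question bites: if one client mount has to pull blocks back from several nodes, something has to hide that fan-out behind a single address.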

Well, I'll wait for the product to launch and let customers share their comments and successes. I'd also like to know the applications used to drive the CACHEfx and how the data will be collected.

I don't know, microsecond response time would be great, considering the process overhead, compression/decompression schemes, and other miscellaneous things usually happening inside any product. If someone could put an inline analyzer on it, generate a pcap file, and get SRT (server response time) data for NFS, that'd be an easy buy for me.
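For what it's worth, the SRT math itself is simple once the capture is decoded: pair each NFS RPC call with its reply by XID and diff the timestamps. A rough sketch below, assuming the pcap has already been parsed into (timestamp, xid, is_call) tuples by whatever analyzer you use; the sample records are fabricated and nothing here is tied to a particular tool:

    # Sketch: compute NFS server response time (SRT) from decoded RPC records.
    # Input is assumed to be (timestamp_seconds, xid, is_call) tuples already
    # extracted from a pcap by an external analyzer; that decoding is not shown.

    def server_response_times(records):
        """Pair each RPC call with its reply by XID and return the latencies."""
        pending = {}        # xid -> call timestamp
        latencies = []
        for ts, xid, is_call in records:
            if is_call:
                pending[xid] = ts
            elif xid in pending:
                latencies.append(ts - pending.pop(xid))
        return latencies

    if __name__ == "__main__":
        # Fabricated example: two calls and their replies.
        sample = [
            (0.000100, 0x1A2B, True),
            (0.000190, 0x1A2B, False),   # ~90 microseconds
            (0.000300, 0x3C4D, True),
            (0.000850, 0x3C4D, False),   # ~550 microseconds
        ]
        print(["%.0f us" % (s * 1e6) for s in server_response_times(sample)])

If those numbers really came back in the tens or hundreds of microseconds under load, that would settle the question for me.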

gbush

garyo

Great questions and comments. We'll be sharing more technical detail over the next few months. In the meantime, we are happy to provide prospective customers with any information they need, one-on-one. Thanks, Gary
