
Monday, June 25, 2007

3PAR: Reversal of Fortune

Last weekend, while browsing my archive on the mobile drive, I came across a text file with an interesting quote. I don't know who said this or where it was published, but the timestamp on the text file shows August 2002.
3PAR won't make it as far as BlueArc did, but will mostly likely fail for many of the same reasons. A box is a box is a box...."What's the price per megabyte?"....This is unfortunate, but what customers really want and need is faster, better, cheaper storage WITH integration with all the other pieces of the SAN and applications. Building a bigger, faster box takes a while. Certifying and integrating useful applications, host support, switch support and the ENDLESS list of combinations with this and HBAs takes a LOT longer. Not to mention costly and engineering-intensive real-world performance testing to prove it really is better.

As with many things, if it's not faster, cheaper and better, there isn't much motivation for large customers to take the risk; especially in this climate.
Write a comment or send me an email if you know the origin of the above quote. Update: An anonymous comment pointed to the B&S Message Board as the source of the above quote.

Five years later, I don't know about BlueArc, but 3PAR seems to be on its way to becoming a successful, established subsystem vendor.

At SNW, I heard praise for 3PAR from several customers who were using its subsystems and from prospects who were in the evaluation phase. What was surprising was that not one of them mentioned the much-hyped thin provisioning as the primary reason for selecting 3PAR. All pointed to the 3PAR volume manager and the striping of data across available disk resources as the primary reason, with comments like "HP EVA like capabilities in 3PAR go far beyond EVA."

Both 3PAR marketing at SNW and CEO David Scott in Byte & Switch seem to be highlighting thin provisioning as the main reason for their success in a highly competitive subsystem market with very conservative large enterprise customers. Is that really so?

What are your reasons for selecting or not considering a 3PAR subsystem?

Wednesday, June 20, 2007

Gear6 trailblazing Network Caching

Earlier this week, I had a great conversation with Gary Orenstein and Jack O’Brien at Gear6. Here are excerpts from our conversation.

How is Gear6 doing?

Gear6 seems to be doing well. Several units are currently in the field being evaluated by various customers. No specific number of units was provided, just a wide range between 10 and 100. The company has over thirty employees and is financially set for the near term. It has started to focus CACHEfx on the financial analytics, energy and animation segments and will expand that focus by the end of the year.

What are the benefits of network based caching?

Network caching enables increased cache utilization, flexibility and scalability. Caching is moving from end devices into the network and becoming a network resource.

What one factor is attracting customers to your caching solution?

By the nature of caching, the obvious benefit to the customer is performance. Most customers who come to Gear6 have performance problems and variable workloads, and demand a certain quality of service. The success rate with customer evaluations is very good, as the CACHEfx appliance doesn’t require a forklift replacement.

How is Gear6 doing caching?

The CACHEfx appliance doesn’t use any conventional mechanical disk storage internally; it is a 100% RAM cache and acts as a pass-through to persistent storage. It is a robust, single-purpose appliance designed to do one job and do it very well.

The caching is performed intelligently. The intelligence focuses on how and where data is placed within the appliance. There are extensive built-in statistics; most customers are impressed by its network-sniffer-like capability.

In the past, cache was a constrained resource. Now, the focus is on right-sizing the cache. CACHEfx expands from a quarter TB to multiple TB, can be preloaded with data from persistent storage and adjusts to variable I/O profiles.

What are the reliability, availability and scalability features of CACHEfx appliance?

It is a clustered appliance, scalable from a quarter TB to multiple TB, and it can be expanded on the fly. Also, the appliance acknowledges writes only when persistent storage sends an acknowledgment.
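That write path behaves like a classic write-through, pass-through cache. Here is a minimal sketch of that behavior, assuming a simple key/value model; it is my own illustration of the general pattern, not Gear6's implementation, and all names in it are hypothetical.

```python
# Conceptual sketch of a pass-through, write-through RAM cache.
# Illustrative only; BackingStore and RamCache are hypothetical names.

class BackingStore:
    """Stand-in for the persistent storage behind the cache (e.g. an NFS filer)."""

    def __init__(self):
        self._data = {}

    def read(self, key):
        return self._data.get(key)

    def write(self, key, value):
        self._data[key] = value
        return True  # acknowledgment from persistent storage


class RamCache:
    """All-RAM cache that passes every write through to persistent storage."""

    def __init__(self, store):
        self._store = store
        self._cache = {}

    def read(self, key):
        if key in self._cache:             # hit: served from RAM
            return self._cache[key]
        value = self._store.read(key)      # miss: fetch from persistent storage
        self._cache[key] = value
        return value

    def write(self, key, value):
        acked = self._store.write(key, value)   # forward to persistent storage first
        if not acked:
            raise IOError("persistent storage did not acknowledge the write")
        self._cache[key] = value           # only then update the cache...
        return True                        # ...and acknowledge the client


store = BackingStore()
cache = RamCache(store)
cache.write("block-42", b"payload")           # acked only after the store acks
assert cache.read("block-42") == b"payload"   # subsequent reads served from RAM
```

The point of acknowledging the client only after the backing store has acknowledged the write is that losing the cache never loses data.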

Is the CACHEfx installed at D.E. Shaw working with a Solaris cluster?

Gary declined to comment on the customer's infrastructure details. He claimed the customer is pleased with the solution.

Any plans to introduce network caching for block-level traffic? The present product seems to focus on NFS only.

The present focus is on NFS; that market is large enough. The sweet spot is customers with 100+ concurrent clients accessing a single dataset, and most of those tend to be NFS. There are no firm plans for addressing CIFS or block-level traffic. The primary industry focus is on financial analytics, energy and exploration, electronic design, animation, biotechnology, and media, primarily HPC-oriented tasks.

How does network caching stack up with parallel file systems and clustered storage?

Caching addresses I/O-constrained systems rather than processing-constrained ones. Parallel file systems and clustered storage solutions are capacity-centric, not performance-centric, providing a global namespace for ever-expanding storage capacity. They are not low-latency solutions. Network caching is a complementary solution: capacity complemented by performance. The Gear6 solution complements NetApp ONTAP GX, IBRIX, Isilon and Acopia.

Do you have any thoughts on the potential application of CACHEfx in a Wide Area Filer Network environment?

CACHEfx has enormous potential in a variety of environments, but we are currently very focused on solving customer problems within the data center. We are open to partnerships in other areas.

Sunday, June 17, 2007

Bountiful Bandwidth Lagging Latency

Recently, I came across an interesting article, published in 2004, comparing the growth of bandwidth and latency, the reasons for the imbalance between them, and ways to cope with it. The excerpts below are from "Latency Lags Bandwidth: Recognizing the chronic imbalance between bandwidth and latency, and how to cope with it," by David A. Patterson, Communications of the ACM, October 2004, Vol. 47, No. 10.
In the time that bandwidth doubles, latency improves by no more than a factor of 1.2 to 1.4.
Reasons for Bountiful Bandwidth
“There is an old network saying: Bandwidth problems can be cured with money. Latency problems are harder because the speed of light is fixed – you can’t bribe God” – Anonymous.

Moore’s Law helps bandwidth more than latency.
Distance limits latency.
Bandwidth is generally easier to sell.
Latency helps bandwidth.
Bandwidth hurts latency.
Operating system overhead hurts latency.
Coping with Lagging Latency
Caching: Leveraging capacity to help latency.
Replication: Leveraging capacity to again help latency.
Prediction: Leveraging bandwidth to again help latency.
Marketing Latency Innovations
The difficulty of marketing latency innovations is one of the reasons latency has received less attention thus far.

Perhaps, we can draw inspiration from the more mature automotive industry, which advertises time to accelerate from 0-to-60 miles per hour in addition to peak horsepower and top speed.
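To get a feel for how that 1.2 to 1.4 factor compounds, here is a quick back-of-the-envelope calculation of my own (not from the article), assuming latency improves by a fixed factor every time bandwidth doubles:

```python
# Back-of-the-envelope: the bandwidth/latency gap after several doublings.
doublings = 4  # bandwidth grows 2**4 = 16x over this period

for latency_factor in (1.2, 1.4):
    bandwidth_gain = 2 ** doublings
    latency_gain = latency_factor ** doublings
    print(f"{doublings} doublings: bandwidth {bandwidth_gain}x, "
          f"latency only {latency_gain:.1f}x better "
          f"({latency_factor}x per doubling)")

# Bandwidth improves 16x while latency improves only about 2x to 4x,
# so the imbalance keeps widening.
```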

Tuesday, June 12, 2007

Where do you focus, Bandwidth or Latency?

Since my first post about Gear6, Gary Orenstein and I have been exchanging emails discussing various aspects of storage caching and Gear6. Recently, he commented in response to my request for pointers on the storage caching market and implementations:
When I find interesting items related to caching I usually post on our blog. The thing is, there really hasn't been anyone promoting network-based caching until Gear6.
With rising interest in flash memory and SSDs, I am finding storage caching quite intriguing. I decided to start from the basics.

What problems does caching solve?

The major benefit of caching is in reducing latency, whether the caching is part of the web, network, file system, storage device, processor or memory. What is latency? Any delay in response to a request.
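A simple way to quantify that benefit is the standard hit-rate-weighted average: the effective latency is the cache latency on hits plus the backend latency on misses. The numbers below are purely illustrative, not measurements of any product:

```python
# Average response time with a cache in the data path (illustrative numbers).
def average_latency_us(hit_rate, cache_latency_us, backend_latency_us):
    """Hit-rate-weighted blend of cache and backend latency, in microseconds."""
    return hit_rate * cache_latency_us + (1 - hit_rate) * backend_latency_us

# Example: 100 us from a RAM cache vs 8,000 us from disk-backed storage.
for hit_rate in (0.0, 0.5, 0.9, 0.99):
    avg = average_latency_us(hit_rate, 100, 8000)
    print(f"hit rate {hit_rate:.0%}: {avg:,.0f} us average")

# Even a 90% hit rate cuts the average from 8,000 us to about 890 us.
```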

Bandwidth Bias

One consistent theme that struck me as odd as I started studying caching is how often we suggest more bandwidth as a solution to slow performance and how little focus we give to the latency side of the problem. What is bandwidth? The amount of data carried from one point to another in a given time.

Even in the iSCSI world, we all hear how 10GbE will be the inflection point, indirectly giving the impression that bandwidth is the bottleneck in iSCSI adoption. What is the real bottleneck in iSCSI? Is it bandwidth or latency?

I guess it sounds more impressive to say, "With 10GbE, the bandwidth will increase 10X so you will be able to push ten times the data, but latency will only be cut roughly in half."

From the productivity perspective of users and applications, a predictable and quick response to a request seems to be considerably more important than the amount of data being transferred over a specified period. What good does more bandwidth do if data has to wait for processing? A balance between bandwidth and latency needs to be considered when designing solutions.
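To make that concrete, here is a rough model of a single small request, using my own illustrative numbers and assuming request time is simply round-trip latency plus payload size divided by link bandwidth:

```python
# Request time = round-trip latency + serialization time (size / bandwidth).
def request_time_us(size_bytes, bandwidth_gbps, latency_us):
    serialization_us = size_bytes * 8 / (bandwidth_gbps * 1_000)  # bits -> us
    return latency_us + serialization_us

size = 4 * 1024  # a 4 KB I/O
print(f"1GbE,  100 us latency: {request_time_us(size, 1, 100):.1f} us")
print(f"10GbE, 100 us latency: {request_time_us(size, 10, 100):.1f} us")
print(f"10GbE,  50 us latency: {request_time_us(size, 10, 50):.1f} us")

# Ten times the bandwidth shaves only ~30 us off a ~133 us request;
# halving the latency saves more than that again.
```

For large sequential transfers the balance flips toward bandwidth, which is exactly why the right answer depends on the workload.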

In the end, my impression is that most of us tend to focus too much on bandwidth and too little on latency.

Wednesday, June 06, 2007

Wikibon, The Improvements Needed

As I mentioned previously, the Wikibon project is very interesting and, schedule permitting, I plan to monitor its progress. I see the value of collective intelligence and of bringing down the barriers in the market research and industry analysis segment. If the approach succeeds, it will revolutionize this industry the way Wikipedia did the encyclopedia business.

The intent of this post is not to dismiss the initiative as more hype from the social networking era. All new experiments go through a phase of trial and error before finding their footing and niche. I feel Wikibon is currently in that early phase, where Dave and his team are trying various things to see what sticks, what doesn't, and what will help them realize their vision.

The objective of this post is to help them during this early phase by making two very specific suggestions for improvements.

Lead with Content

Overall, Wikibon started with a good web presence. My only design suggestion is to lead with content and a cleaner interface; otherwise the site takes away from the community and participatory feel of the initiative. Some annoyances:
  • Too many choices and too much information crammed into the home page.

  • Unnecessary and excessive use of text boxes, fonts in different colors and sizes, and slide-style boxes and graphics.
As Chris Evans commented, and I agree, to gain any type of mindshare, Wikibon needs to highlight the content, not the people.

Do you really need Wiki format?

It is great to see 340 articles on a variety of topics already posted on Wikibon. Most articles seem to be "independent" in nature, written by individual authors and containing only their opinions, with very little scope for others to modify the content. I found the content to be a better fit for a blog format than for a wiki.

This is a typical challenge in most wiki projects: are the topic and content conducive to modification by others? If the content invites comments from readers rather than modification, then it fits better in a blog format. I am sure you will also be able to tell which content is likely to be modified and extended with new information, and which content is likely only to receive comments.

Compare the Wikipedia Backup page with the following Wikibon backup articles. It doesn’t take long to identify which pages are more likely to be modified, or have content added, by someone other than the original author/creator.
Implementing fail proof backup and recovery
Backup and recovery options
Backup and recovery techniques
Sizing up backup and recovery options
Data de-duplication and the low-end backup/restore choice
Check out the storage market prediction trading feature at Wikibon. It is an excellent feature that has the potential to leverage the power and knowledge of the community.

Time permitting, I may review Wikibon further.

Tuesday, June 05, 2007

Wikibon, An experiment in Collective Intelligence

A few weeks ago, David Vellante contacted me about his new project, Wikibon, and invited me to attend Peer Incite research meetings. Wikibon is a project in which he is trying to harvest and share the collective intelligence of the IT community for market research, industry analysis and insights. Since Dave previously founded the storage research group at IDC, it was no surprise that he picked enterprise storage as the first industry segment to target with the Wikibon project.

What piqued my interest?

Considering that the industry analyst world is a walled garden, with entry allowed only to the chosen few who can pay a hefty entrance fee, Wikibon is an interesting experiment. Any cracks in the garden walls are a welcome change for an Average Joe like me. But my interest in Wikibon extends beyond just an open source experiment in IT market research. I am more excited about the harvesting and sharing of collective intelligence in this 'public' experiment.

How beneficial would it be for an organization to make decisions based on this collective intelligence instead of listening to a chosen few with the loudest voices or political connections? Unfortunately, considering the competitive advantage such an approach offers, few organizations that have experimented with collective intelligence internally are willing to share and discuss their methods and findings publicly. I believe that blogs and wikis are not just external-facing marketing communication tools for enterprises. They are also excellent methods for harvesting the collective intelligence of everyone within an organization, especially one operating in a knowledge-intensive industry.

Unfortunately, Dave couldn't make me get up early enough in the morning to attend a meeting at 9:00am ET (6:00am on my coast), which later turned out to be just a typo and time zone confusion. Finally, this morning I attended the Peer Incite research meeting on the topic of data de-duplication. Even though this topic doesn't excite me anymore [more in a later post; I have moved on to other new and exciting topics], the affiliations of the vocal participants and the dynamics among them were interesting to observe.

So, what are my impressions of and feedback on the Wikibon project, its community web presence and the Peer Incite meeting?

As mentioned before, the Wikibon project has definitely piqued my interest, whether or not my reasons align with Dave's vision. I plan to monitor its progress, share my opinions, and participate and report as time permits.

[Too late in the night] I will try to continue my feedback on this project in another post.

Sunday, June 03, 2007

Blogging Hiatus

For the last couple of weeks, I was absent from blogging due to back-to-back trips to Anchorage and Princeton. Unlike Storagezilla going off the grid, my blogging hiatus was unintentional and due to the demands of the day job and personal life. The highlights of the trips were experiencing the scenic beauty of Alaska for the first time, the opportunity to play 18 holes at Bunker Hill Golf Course, visiting Princeton University and talking to a couple of very smart people.

Note to readers: Blog posts will be sporadic from June through August due to the demands of a few other personal initiatives.