When I find interesting items related to caching, I usually post them on our blog. The thing is, there really hasn't been anyone promoting network-based caching until Gear6. With rising interest in flash memory and SSDs, I am finding storage caching quite intriguing, so I decided to start from the basics.
What problems does caching solve?
The major benefit of caching is reduced latency, whether the cache sits in the web tier, the network, the file system, a storage device, the processor, or memory. What is latency? Any delay between a request and its response.
One theme that struck me as odd when I started studying caching is how often we suggest more bandwidth as the solution to slow performance, and how little attention we give to the latency side of the problem. What is bandwidth? The amount of data carried from one point to another in a given amount of time.
Even in the iSCSI world, we all hear how 10GbE will be the inflection point, indirectly giving the impression that bandwidth is the bottleneck in iSCSI adoption. What is the real bottleneck in iSCSI? Is it bandwidth, or latency?
I guess it sounds more impressive to say, "With 10GbE, bandwidth will increase 10X, so you will be able to push ten times the data," than to admit that latency will only be cut roughly in half.
From the productivity standpoint of users and applications, a predictable and quick response to a request seems considerably more important than the amount of data transferred over a given period. What good does more bandwidth do if data still has to wait to be processed? A balance between bandwidth and latency needs to be considered when designing solutions.
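The trade-off can be made concrete with back-of-the-envelope arithmetic. A minimal sketch (Python, with made-up numbers: a hypothetical 4 KB read, 100 µs round-trip latency on 1GbE, halved to 50 µs on 10GbE) shows that for small requests the total time is dominated by latency, so a 10X bandwidth jump helps far less than cutting the round trip:

```python
# Rough model of request completion time:
#   total_time = latency + transfer_size / bandwidth
# All numbers below are illustrative assumptions, not measurements.

def total_time_us(latency_us: float, size_bytes: int, bandwidth_gbps: float) -> float:
    """Time to complete one request, in microseconds."""
    bytes_per_us = bandwidth_gbps * 1e9 / 8 / 1e6  # Gb/s -> bytes per microsecond
    return latency_us + size_bytes / bytes_per_us

# Hypothetical 4 KB read: 1GbE at 100 us latency vs 10GbE at 50 us latency.
t_1g  = total_time_us(100.0, 4096, 1.0)   # ~132.8 us (latency is ~75% of it)
t_10g = total_time_us(50.0,  4096, 10.0)  # ~53.3 us  (latency is ~94% of it)
print(f"1GbE: {t_1g:.1f} us   10GbE: {t_10g:.1f} us")
```

Note that the 10X bandwidth increase shaved only about 30 µs off the transfer, while halving latency saved 50 µs; and the faster the link gets, the more completely latency dominates small I/Os.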
In the end, my impression is that most of us tend to focus too much on bandwidth and too little on latency.