
Friday, March 02, 2007

Distributing Desperate Housewives to Ten Millions

Now that the title and image have caught your attention, the big letdown is that this post has no housewives to offer! It is about distributing episodes of the ABC television show “Desperate Housewives” over the Internet … or maybe not even that!!

After my previous post, P2P powered Devices … coming soon?, Newell Edmond, co-founder of GridNetworks, forwarded me an interesting paper on Video Internet. That paper led me to Robert Cringely's March 2, 2006 column, Peering into the Future: Why P2P is the Future of Media Distribution even if ISPs have yet to Figure that out:
"Desperate Housewives," in its puny 320-by-240 iTunes incarnation, occupies an average of 210 megabytes per episode. A full-resolution version would be larger still. In theory, it would be four times as big, but practically it would probably come in at double the size or 420 megabytes. But let's stick with the little iTunes version for this example.

Twenty million viewers, on average, watch "Desperate Housewives" each week in about 10 million U.S. households. That's 210 megabytes times 10 million downloads, or 2.1 petabytes of data to be downloaded per episode. Fortunately for the download business model, not everyone is trying to watch the show at the same time or in real time, so iTunes, in this example, has some time to do all those downloads. Let's give them three days. The question on the table is what size Internet pipe would it take to transfer 2.1 petabytes in 72 hours? I did the math, and it requires 64 gigabits-per-second, which would require an OC-768 fiber link and two OC-256s to fulfill.
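
As a quick sanity check on Cringely's figures, here is a small back-of-the-envelope calculation in Python. It simply restates the arithmetic from the quote above, assuming decimal megabytes and a flat 72-hour delivery window:

```python
# Back-of-the-envelope check of the bandwidth figure quoted above.
episode_mb = 210              # iTunes episode size, megabytes (decimal)
downloads = 10_000_000        # one download per viewing household
window_hours = 72             # three days to deliver them all

total_bits = episode_mb * 1_000_000 * downloads * 8   # ~2.1 PB as bits
seconds = window_hours * 3600

required_gbps = total_bits / seconds / 1_000_000_000
print(f"Required sustained bandwidth: {required_gbps:.1f} Gbit/s")
# Prints roughly 64.8 Gbit/s, in line with the ~64 Gbps in the column.
```
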
Even though Cringely was discussing the bandwidth challenge of transferring one episode of Desperate Housewives, my mind wandered off to the storage infrastructure side of the equation.

What type of storage infrastructure ecosystem would someone need to fulfill ten million download requests for a single episode of Desperate Housewives?

In my opinion, a storage infrastructure built around monolithic centralized storage most probably wouldn’t be practical. But this post is not about my opinion; it is about yours, so chime in with your thoughts on a potential solution to this problem.

Show your design prowess, or extol the virtues of your favorite storage vendors, with your own storage ecosystem design. All responses are welcome.
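
To frame the scale of the problem, here is a minimal sizing sketch to get the discussion started. The 200 MB/s sustained-read figure per storage node is purely an assumption for illustration, not a number from any vendor:

```python
# Rough sizing sketch for the storage tier behind ten million downloads.
episode_bytes = 210 * 1_000_000       # one iTunes-sized episode
downloads = 10_000_000                # one download per household
window_seconds = 72 * 3600            # same three-day window as above

aggregate_mb_s = episode_bytes * downloads / window_seconds / 1_000_000
node_mb_s = 200                       # assumed sustained read per node
nodes_needed = aggregate_mb_s / node_mb_s

print(f"Aggregate read rate: {aggregate_mb_s:,.0f} MB/s")
print(f"Nodes at {node_mb_s} MB/s each: about {nodes_needed:.0f}")
# Of course, a single 210 MB episode fits easily in cache, which changes
# the picture entirely -- see the comments below.
```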

2 comments:

  1. Anil

    I don't see this as being a storage issue. Chances are, as each episode is made available, you'd have 10 million requests to serve (the number at any one time depending on available bandwidth and client speed), with the data coming through a web server type application. 210MB is not a lot, so it could all be cached by the webserver, and depending on cache size and the number of different programs being recalled, very little disk I/O would take place. Even if it did, today's large arrays can cache as well, so they'd probably have no issue delivering this data to multiple web server hosts.

  2. Chris,

    It is a data delivery issue, and storage impacts it. IMO, a simple design based on a webserver backed by storage will have serious performance issues. Limitations at the webserver and at the storage, even with cache, will show up.

    Anil
