
Thursday, November 29, 2007

Storage Challenge for Media Firms: High Definition

My last blog post asking for help in designing a suitable storage solution for a creative media firm has been generating some quality comments from lonerock, Paul Clifford, and others. Unfortunately, due to a few other commitments, I wasn't able to communicate further with the executive at the creative media firm. Below is our last exchange, which highlights further details of his environment and the challenges his firm faces.
Why are you considering moving from NAS boxes to a SAN solution? Are you using NAS boxes from a specific vendor?
JD: Whenever we have needed to increase our storage, we have usually ended up buying more NAS heads, which means more equipment to manage. Our equipment is purchased from Dell, which is our preferred hardware vendor. Earlier this year we anticipated growth and increased our storage by 200%: we purchased two Dell 2950 storage servers running Microsoft Storage Server 2003, giving us about 2.5TB of space on each, and we used one of the servers for a live backup. Each of these machines is a dual quad-core Intel box. By the middle of the year we are already running low! So instead of buying another NAS, we are looking at the Celerra (though we will use it as a NAS head) as it seems to have a better scalability path. We would then use our current 2950s as render servers.
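A rough way to sanity-check that growth curve is to project when the usable capacity runs out. The sketch below is purely illustrative; the current usage and monthly growth figures are assumptions, not JD's numbers, and should be swapped for real monitoring data.

    # Back-of-envelope capacity projection (illustrative only).
    usable_tb = 2.5           # usable space on the primary 2950 (from JD's figures)
    used_tb = 1.0             # assumed current usage (hypothetical)
    monthly_growth_tb = 0.25  # assumed growth per month (hypothetical)

    months_left = (usable_tb - used_tb) / monthly_growth_tb
    print(f"At ~{monthly_growth_tb}TB/month, free space lasts ~{months_left:.1f} months")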
Are you able to share more details on what specific storage hiccups and access speed issues you are facing?
JD: We do a lot more HD animation work than before, which leads to larger render files, and some of our client projects now run for a year or longer and need to stay active on our systems. Our reluctance to remove old render files means a project's footprint grows dramatically over its lifespan. Over long weekends, when our render farm is churning through files, we need to make sure we have a clear 200-250GB of space, but usually we are struggling and end up spending a lot of time pruning projects to free space. Speed-wise we don't think we are in bad shape, though we have 40 users hitting our NAS box, and the monitoring does show 100% utilization and queuing of requests. We were thinking of bringing in more NAS devices and splitting user groups across them, but again we would end up with more equipment and management issues.
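As a rough illustration of why splitting user groups across NAS heads helps, consider the per-user share of throughput when everyone hits one box at once. This assumes a single gigabit path into the NAS head (roughly 125MB/s before protocol overhead), which is an assumption for the sake of the arithmetic:

    # Rough per-user throughput at saturation (illustrative assumptions).
    link_mb_per_s = 125    # assumed effective ceiling of one gigabit path
    concurrent_users = 40  # users hitting the NAS head (from JD's figures)

    per_user = link_mb_per_s / concurrent_users
    print(f"~{per_user:.1f} MB/s per user if all {concurrent_users} hit the box at once")
    # Splitting users across two NAS heads roughly doubles that share,
    # at the cost of one more device to manage.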
How are workstations, rendering servers and NAS boxes currently connected? Can you provide further details of your current infrastructure?
JD: All connections are via gigabit Ethernet, and we have Cisco Catalyst switches. Our current storage server is connected to the switch via a dual fibre link. Users typically open 3D files that reference 100 or more linked texture and material files, work on them, and then send them over to our render farm using a render manager. The render servers pick up the request and start rendering; depending on the scene it can take anywhere between 20 and 40 minutes for a frame to be created and written to disk. Each frame can be 2-3MB in size. We usually do multiple passes, meaning 5+ rendered frames make up 1 final frame, and 30 frames make up 1 second of animation, so it adds up!
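Taking JD's figures at face value (5 passes per final frame, 30 frames per second of animation, 2-3MB per rendered frame), a quick back-of-envelope calculation shows how fast the output accumulates. The pass count and frame size are treated as fixed here purely for illustration; they will vary per project.

    # Disk consumed by one second of finished animation (rough estimate
    # from JD's stated figures).
    passes_per_frame = 5         # "5+ frames make up 1 frame"
    frames_per_second = 30       # 30 frames = 1 second of animation
    mb_per_rendered_frame = 2.5  # midpoint of the 2-3MB range

    mb_per_second = passes_per_frame * frames_per_second * mb_per_rendered_frame
    gb_per_minute = mb_per_second * 60 / 1024
    print(f"~{mb_per_second:.0f} MB per second of animation")
    print(f"~{gb_per_minute:.1f} GB per minute of animation")

At roughly 375MB per second of animation, a few minutes of finished HD output already sits in the same range as the 200-250GB of headroom JD tries to keep free over a long weekend.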
What are you finding attractive about the Celerra NS20 (another NAS head)?
JD: Scalability mainly, and the ability to easily manage storage and dynamically carve out space depending on requirements. We will initially go with 8TB of usable space and add more drive trays as and when we need them in future. Also, the backup system for the Celerra, with its snapshot option, looks pretty impressive.
What is a typical day-to-day workflow that utilizes and strains the current infrastructure?
JD: As 3D visualization and rendering is our core business, our biggest issues have always been render capacity and storage capacity. We are in good shape on render capacity with 20+ servers; storage is the current issue. We can render more in a shorter time frame, and our HD frames are two to three times the size they used to be. So there is constant pruning (which means we may be deleting files we really shouldn't!) and a scramble to archive projects to tape the moment they are done, only to bring them back on again when a client comes back a few months later with changes.
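Since pruning under pressure is exactly where the wrong files get deleted, one small mitigation is a script that only lists pruning candidates (oldest and largest render files first) rather than deleting anything, so the clean-up decision stays with a human. The sketch below assumes a hypothetical render share path and age threshold; it is not something JD's team actually runs.

    # List prune candidates without deleting anything (illustrative sketch).
    import os, time

    RENDER_ROOT = r"\\storage01\renders"  # hypothetical share path
    MIN_AGE_DAYS = 90                     # assumed "safe to consider" age

    cutoff = time.time() - MIN_AGE_DAYS * 86400
    candidates = []
    for dirpath, _, filenames in os.walk(RENDER_ROOT):
        for name in filenames:
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            if st.st_mtime < cutoff:
                candidates.append((st.st_size, full))

    # Largest files first -- review the list before archiving or deleting.
    for size, path in sorted(candidates, reverse=True)[:50]:
        print(f"{size / 2**20:8.1f} MB  {path}")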