
Monday, October 31, 2005

CDP, CAS, Audit, Time Travel, and an Old Friend

It was great to see an old friend, Bill Bowerman, joining the “Blog” trend by publishing his thoughts at ComputerWorld. At one time, Bill and I were colleagues at KOM Networks. In his blog, he wrote an interesting piece on CDP that resonates with me, having lived through the buzz of Virtualization, ILM, SAN Security, and now CDP.

Also, his idea about CDP using a WORM optical solution reminded me of a discussion I had with him last July, outside the Hilton in downtown Toronto. I was reminiscing about how Content-Addressable Storage (CAS) vendors claim compliance with regulations by preventing the deletion of data from their storage. But none of the CAS solutions can offer the “true” delete prevention and audit capability offered by WORM Magneto-Optical (MO) and optical disks.
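For readers unfamiliar with the CAS idea, here is a minimal Python sketch of content addressing. The `CasStore` class and its methods are my own illustration, not any vendor's API: a blob's address is derived from a hash of its content, and the advertised “delete prevention” amounts to never exposing a delete operation in software.

```python
import hashlib

class CasStore:
    """Minimal content-addressable store: blobs are keyed by their own hash."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        # The address is derived from the content itself, so identical
        # content always lands at the same address.
        address = hashlib.sha256(data).hexdigest()
        self._blobs[address] = data
        return address

    def get(self, address: str) -> bytes:
        return self._blobs[address]

    # Note: no delete() is exposed. This is the "delete prevention" CAS
    # vendors point to -- enforced in software, unlike WORM MO/optical
    # media, where the physics of the disk prevents rewriting.

store = CasStore()
addr = store.put(b"quarterly report, final")
assert store.get(addr) == b"quarterly report, final"
```

The weakness, of course, is that a guarantee enforced purely in software can be bypassed at a layer below it, which is exactly the gap that write-once media closes.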

In my opinion, preventing the deletion of data is only one-fourth of the ‘audit’ equation for regulatory compliance. The other three fourths are the ability to prove the integrity of the data in question, the ability to track any modifications made to it, and the ability to see the modifications that were actually made.

And the time travel capability can offer this ability to see, chronologically, the modifications actually made to the data. We talked about how the vendors who offer time travel capability on MO/optical disks are missing a great opportunity by not extending it to magnetic disks.
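As a rough illustration of what audit plus time travel means in practice, here is a Python sketch of my own (nothing here reflects any real product's interface): every modification is appended to a trail that is never rewritten, and any past state can be read back by replaying the trail up to a chosen moment.

```python
from datetime import datetime, timezone

class AuditedDocument:
    """Append-only modification trail; any past state is replayable."""

    def __init__(self, content=""):
        self.trail = [(datetime.now(timezone.utc), content)]

    def modify(self, new_content):
        # Every change is appended; nothing in the trail is ever rewritten,
        # so the trail doubles as the audit record of what changed and when.
        self.trail.append((datetime.now(timezone.utc), new_content))

    def as_of(self, when):
        """'Time travel': return the content as it stood at time `when`."""
        state = None
        for timestamp, content in self.trail:
            if timestamp > when:
                break
            state = content
        return state
```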

As for CDPA (Continuous Data Protection & Availability), I believe there are easier ways to achieve CDPA at both the file level and the block level.

File-level CDPA is pretty easy to understand for anyone who has used DEC VMS, which had versioning and journaling to achieve CDPA at the file level. Of course, at that time we didn’t give it fancy acronyms like CDPA. I was introduced to these features over a decade ago when I was working at Dow Chemical Company.

Versioning was great: every time a file was saved, VMS saved it with the same name but incremented the version number (ex: WORD.TXT;23). So if we ever needed to discard the changes made between two versions, we just opened the older version (ex: WORD.TXT;15) and resaved it. Of course, in those days storage capacity wasn’t plentiful by today’s standards, and we always complained about old versions taking up precious space. Almost everyone had a DCL script to delete old versions regularly and recover space.
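A rough Python sketch of that VMS behaviour, assuming a simple one-directory layout of my own invention: each save writes a new `NAME;version` file, reverting is just resaving an older version's contents, and the purge function plays the role of everyone's DCL cleanup script.

```python
import os

def vms_save(directory, name, data):
    """Save VMS-style: never overwrite, always bump the version number."""
    versions = [
        int(f.rsplit(";", 1)[1])
        for f in os.listdir(directory)
        if f.startswith(name + ";")
    ]
    new_version = max(versions, default=0) + 1
    path = os.path.join(directory, f"{name};{new_version}")
    with open(path, "wb") as fh:
        fh.write(data)
    return path  # e.g. ".../WORD.TXT;23"

def vms_revert(directory, name, version):
    """Discard later changes by resaving an older version as the newest."""
    with open(os.path.join(directory, f"{name};{version}"), "rb") as fh:
        return vms_save(directory, name, fh.read())

def vms_purge(directory, name, keep=2):
    """The DCL-script habit: delete all but the newest `keep` versions."""
    files = sorted(
        (f for f in os.listdir(directory) if f.startswith(name + ";")),
        key=lambda f: int(f.rsplit(";", 1)[1]),
    )
    for f in files[:-keep]:
        os.remove(os.path.join(directory, f))
```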

Journaling offered the capability to recover changes we had made to a file but lost to some failure before we could save the file.
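A minimal sketch of that journaling idea, again my own illustration (the `word.journal` file name is hypothetical): each edit is appended and flushed to a journal before it ever reaches the saved file, so after a crash the unsaved edits can be replayed.

```python
import json, os

JOURNAL = "word.journal"  # hypothetical journal file for one document

def record_edit(edit):
    """Append each edit to the journal before it ever reaches the file."""
    with open(JOURNAL, "a") as fh:
        fh.write(json.dumps(edit) + "\n")
        fh.flush()
        os.fsync(fh.fileno())  # make it durable before the next edit arrives

def recover_edits():
    """After a failure, replay the journal to rebuild the unsaved changes."""
    if not os.path.exists(JOURNAL):
        return []
    with open(JOURNAL) as fh:
        return [json.loads(line) for line in fh if line.strip()]
```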

I guess today’s file-level CDPA will be some combination of versioning and journaling.

It may be easier to accomplish block-level CDPA by extending current snapshot technology. There are two components to storing data on any block-level storage device:

1. The blocks where the actual data is written, and
2. The blocks where pointers (metadata) to the actual data blocks are written.

Current snapshot technology tracks only the changes to the content of the actual data blocks that are about to be overwritten, and then updates the pointers.

So how do you achieve CDPA using snapshot technology? As Bill rightly pointed out, you are not going to get true CDPA just by reducing the time interval between two point-in-time (PIT) snapshots. But you can achieve CDPA by continuously tracking the changes to the pointer (metadata) blocks in addition to the changes to the actual data blocks, as sketched below.
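Here is a toy Python sketch of the idea, purely my own illustration rather than any product's design: writes never overwrite in place, and both the data-block write and the pointer update are journaled, so any point in time can be reconstructed by replaying the pointer changes.

```python
import time

class CdpaVolume:
    """Toy block device that journals data-block AND pointer-block changes."""

    def __init__(self):
        self.data = {}       # physical block number -> payload
        self.pointers = {}   # logical block -> physical block (the metadata)
        self.journal = []    # continuous, timestamped change record
        self.next_phys = 0

    def write(self, logical, payload):
        # 1. Never overwrite in place: put the data in a fresh physical block.
        phys, self.next_phys = self.next_phys, self.next_phys + 1
        self.data[phys] = payload
        self.journal.append((time.time(), "data", phys, payload))
        # 2. Update the pointer -- and journal that change too. Continuously
        #    tracking pointer changes is what makes this CDPA rather than a
        #    series of point-in-time snapshots.
        old = self.pointers.get(logical)
        self.pointers[logical] = phys
        self.journal.append((time.time(), "pointer", logical, old, phys))

    def read_as_of(self, logical, when):
        """Reconstruct any past state by replaying pointer changes."""
        phys = None
        for entry in self.journal:
            if entry[0] > when:
                break
            if entry[1] == "pointer" and entry[2] == logical:
                phys = entry[4]
        return None if phys is None else self.data[phys]
```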

I will probably need to explain this CDPA concept further in another blog entry, as I need more time to draw an illustration of it.

Thursday, October 27, 2005

Training Feedback

Recently, I got a few emails from people who attended my training sessions for the Storage Foundation class. It is always great to get feedback from participants; it helps me improve future training sessions.

... Thanks very much for the excellent instruction ...

... It was very useful to receive the SNIA training ...

The last couple of sessions were excellent and very interactive, as I got good cooperation from the people attending my class. It is usually tough to balance the needs of a diverse group, as most sessions include both people who know a lot about storage and people who know very little.

Tips for SNIA Storage Network Foundations exam (S10-100)

In addition to the training handouts, review the Education Tutorials, the Shared Storage Model white paper, and the basic concepts of SMI-S. Also, don't forget to take the practice test available on the SNIA website and review the questions in the training handouts.

Monday, October 24, 2005

Backup Tutorial

Today, a client asked me to build a technology-focused tutorial on Backup & Recovery for technical support and solution design staff. I am also compiling a list of reference materials that cover the topic without the "vendor marketing" stuff, for the client to purchase.

I am sharing this list for everyone's benefit. I haven't yet reviewed most of the material listed here, except the first book. As I get access to these materials, I will post my impressions. Let me know your favorite material on this topic.

BOOKS

1. The Backup Book: Disaster Recovery from Desktop to Data Center, By Dorian Cougias, E L Heiberger, Karsten Koop

2. Unix Backup & Recovery, By W. Curtis Preston

3. The Disaster Recovery Handbook, By Michael Wallace, Lawrence Webber

4. Using SANs and NAS, By W. Curtis Preston

5. Oracle 9i RMAN Backup & Recovery, By Robert Freeman, Matthew Hart

6. Security Planning and Disaster Recovery, By Eric Maiwald, William Sieglein

Friday, October 21, 2005

Storage Training, Certification & Large Archive

I have been AWOL for a while ... busy with consulting in healthcare storage and providing training for SNIA Storage Networking Certification Exams.

I got my SNIA Certified Professional (SCP) and SNIA Certified Systems Engineer (SCSE) certifications, so I thought I would put them to good use by helping others understand storage networking and achieve the appropriate SNIA certification. In a recent training session, it was interesting to learn from one student, who lives and works in London, England, about the need for and scarcity of qualified SAN professionals in Western Europe. So it may be a good place to look for new opportunities ... maybe I will try some short-term work across the pond.

Consulting-wise, I am compiling business and technical requirements and designing a high-level architecture for a large-capacity (2 - 10 petabyte) regional archive for a client ... an interesting project. I am also involved with another project offering archiving as a managed service.