Everyone wants their data to be protected, but not everyone puts in the legwork required to ensure their data is safe. Most people take the “set it and forget it” approach when really they should be following the 10 percent rule.
Set It and Forget It
There are three common mistakes that businesses make when backing up their data:
1. No testing: When businesses back up their data, many assume it is there, faithfully waiting for them in the same condition in which it was left. The truth is that backup is not a flawless process: glitches occur and hardware fails. The only way to make sure your data is intact and that all systems are go is to routinely test the data you’re backing up. Testing not only gives you peace of mind, it also gives you the opportunity to catch a problem before it becomes a disaster.
2. No planning: No one thinks that a disaster is going to happen to them, but unforeseen events do take place. Many businesses back up their data but spend little time thinking about the recovery process. The first step is to plan the recovery and work backwards from there.
3. Backing everything up: Not all data is created equal. If your house were on fire, would you run in to save a ballpoint pen? No, you would run in to make sure no one was in the house. It’s the same thing with data. If disaster strikes, you want to make sure you can access the most critical data immediately.
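The first mistake, skipping tests, is also the easiest to automate away. Here is a minimal sketch of a backup verification pass that compares checksums between a source tree and its backup copy. The function names and the directory-mirror layout are my assumptions, not anything from the post; a real drill would also test an actual restore, not just file integrity.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir: str, backup_dir: str) -> list:
    """Return a list of relative paths that are missing from or
    corrupted in the backup (empty list means the backup checks out)."""
    source, backup = Path(source_dir), Path(backup_dir)
    problems = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(source)
        bak_file = backup / rel
        if not bak_file.is_file():
            problems.append(f"MISSING: {rel}")
        elif sha256_of(src_file) != sha256_of(bak_file):
            problems.append(f"CORRUPTED: {rel}")
    return problems
```

Running something like this on a schedule turns “we think the backup worked” into “we checked, and it did,” which is exactly the difference the post is describing.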
OK, so now you know what to avoid when protecting your data. What can and should you be doing? First, you must understand the 10 percent rule.
What is the 10 percent rule?
Only 10 percent of your data is critical.
That’s right. That means that 90 percent of your company data is mostly static. Does that mean that you don’t need to protect that 90 percent? Not at all. It means that you should prioritize. As noted above, not all data is created equal. If your systems encounter a widespread failure, you want to have a plan in place that recovers the most essential information right away. That way, business downtime is reduced. If you don’t prioritize your data, you’ll waste your time recovering non-critical data and your downtime could be much, much longer.
So what exactly does critical mean?
Critical varies from organization to organization, but if a file does not change within a certain amount of time, it should be moved into a retention vault. Only changing data should be considered critical.
While all data is arguably important, organizations need a structured or tiered approach to ensure critical applications and systems are operational first. Once these systems are running and accessible, the static, non-critical files can be restored.
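The “only changing data is critical” test above can be sketched in a few lines: split files into a critical tier (recently modified) and a static tier (vault candidates) based on last-modified time. The 90-day threshold and the function name are illustrative assumptions; the post deliberately leaves the cutoff to each organization.

```python
import time
from pathlib import Path

STATIC_AFTER_DAYS = 90  # assumed threshold; tune per organization

def tier_files(root: str, now=None):
    """Split files under `root` into 'critical' (changed within the
    threshold) and 'static' (unchanged long enough to vault)."""
    now = time.time() if now is None else now
    cutoff = now - STATIC_AFTER_DAYS * 86400
    critical, static = [], []
    for f in Path(root).rglob("*"):
        if f.is_file():
            (critical if f.stat().st_mtime >= cutoff else static).append(f)
    return critical, static
```

A recovery plan would then restore the critical tier first and pull the static tier from the retention vault once core systems are back online.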
I once heard a story about an organization whose building burned down, a total loss of the structure and its contents. The IT manager had ONE tape with all the company’s data on it. The company hired security to escort the IT manager and the tape to a location where the data could be restored. Why the big deal? That one tape held $17 MILLION of A/R and all the company’s inventory records.
Lesson: learn what your critical data is, and HAVE A BACKUP FOR YOUR BACKUP. There should never be a single point of failure.
I agree with Garry. If you’ve pinpointed the 10% of your data that is absolutely critical, you should back it up in a couple of different places. The second backup may not be updated as frequently, but you don’t want a single point of failure.
Arrrgh. I know how this feels. I had a backup and a backup of the backup – and then clean-installed a new OS on my system. So imagine my shock when I found that the external hard drive was corrupted: everything seemed messed up except for the movies stored on it. Weird. The other backup was on DVDs, which refused to show any data. I was devastated. A friend ran data recovery software for about four days before recovering most of the data, and of course that pulled back practically everything I had done on my computer in the last three years. Totally gross.
Now I simply delete what I don’t need and keep cross-checking what I’ve saved. I guess there’s no infallible method short of printing everything 🙂 Luckily, in my work, I haven’t lost the old habit of making notes in a notebook.
Nerve-wracking, like a horror story. One friend had no backup at all, and a huge power outage burnt out her hard drive, with no hope of ever recovering anything.
Very useful post, Jennifer. Thank you.
Hi Garry and Brady, thanks for your comments. Disasters always strike at the wrong time. We have seen it all: plumbing issues flooding the server room, fires, electrical power outages, and storms of all kinds. Even the recent earthquake and hurricane on the East Coast should remind us that we are never completely safe from disasters. Off-site backups should be part of every company’s IT strategy.
And Vidya, thanks for your note too. We counsel our clients to design their backup strategy with the recovery in mind. Additionally, we recommend administering disaster recovery drills at least twice a year. This ensures that your data backups are verified and gives you peace of mind that things will work if and when you need to restore.
I will be posting more articles soon…I’m glad you all enjoyed them!
Four out of the five times my PCs have crashed, I was able to recover just about everything I needed to.
The one time I didn’t was brutal.
I didn’t actually lose a lot of money when it happened without a backup service, but I lost a lot of valuable time.
I’m happy with my current service, and it is a set-it-and-forget-it type.
And it’s already been proven that it works.
Great topic that needs to be shared, and I will do so.
The Franchise King®
Thanks for sharing your 10% rule. I work for Symantec and we recently conducted a survey that revealed less than half of SMBs back up their data weekly or more frequently and only 23 percent back up daily. For many SMBs the lack of working disaster recovery and backup procedures is putting their business at risk. The survey also showed that 44 percent of SMBs would lose at least 40 percent of their data in the event of a disaster. The survey report can be found at: http://bit.ly/esaG0u
As we transitioned from a small to a midsized company, we consolidated most of our data into a single database. This may seem to invalidate the 10% rule, since you can’t restore 10% of a database. But as we grew, it became clear we did not need nightly off-site backups of the entire database. Instead of complete cold backups, we concentrated on the archive logs. I just did the math, and the logs plus our other backups equal almost exactly 10 percent of our total data. Five years ago we lost our entire data center to fire (we had full backups back then), and I have been bordering on paranoia ever since; I generally prefer a three-tier backup system. But the RMAN recovery with archive logs (our 10%) is what we test at least twice a year.
Have to agree with Garry absolutely.