Why do companies back up infrequently?

According to a survey by Vanson Bourne, businesses are on average backing up to tape once a month, with one rather alarming statistic from the same survey showing that 10 percent back up to tape only once per year.

Although cloud backup solutions are becoming more common, the majority of companies still run their backups in-house. Sometimes they have dedicated IT staff to run them, but more often backups stay in-house simply because that is how they have always been done, and because companies have confidence in their own security and safekeeping of data.

Given that IT personnel are perhaps creatures of habit, and would not risk a cloud-based backup solution due to concerns over security and/or data integrity, it seems a little odd that backups are done as infrequently as the survey reveals, or that some companies back up only once per year.

The likely reason for this infrequency is the time involved. Many companies run their backups on Friday evenings in the hope that they will be completed before business starts on Monday. But with such large data pools, these backups may not finish in time, and so they are often pushed out to longer, less frequent backup windows.
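
As a rough illustration of the problem (the figures here are assumptions, not taken from the survey), a simple calculation shows how easily a full backup can overrun a weekend window:

# Back-of-the-envelope model (all figures are illustrative assumptions):
# does a full backup fit inside a Friday-evening-to-Monday-morning window?

data_tb = 40                # total data to back up, in terabytes (assumed)
throughput_mb_s = 160       # sustained backup throughput in MB/s (assumed)
window_hours = 60           # Friday 18:00 to Monday 06:00

data_mb = data_tb * 1024 * 1024
backup_hours = data_mb / throughput_mb_s / 3600

print(f"Estimated backup time: {backup_hours:.1f} h against a {window_hours} h window -> "
      f"{'fits' if backup_hours <= window_hours else 'overruns the window'}")

At these assumed figures the job needs roughly 73 hours, so it spills past Monday morning, and the temptation is to run it less often rather than fix the underlying throughput.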

Longer backup times

The I/O bottleneck caused by disk fragmentation is a primary cause of long backup times. Because a backup involves reading files, fragmentation of data files has a pronounced impact on how long a backup takes: an entire data set needs to be read and then copied elsewhere.

This data set could be spread across one volume or many. If a high number of additional I/Os are required to read files before they are transferred, backup speed is heavily impacted. At best, the result is a backup that is greatly slowed down, and at worst it is a failed backup.

Additional I/Os are needed when files are split into multiple pieces, or fragments. It is not at all uncommon to see a file fragmented into thousands or even tens of thousands of fragments, and the impact on backing up files in such a state is considerable.
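
A simple model makes the point (the seek and throughput figures below are assumptions, not measurements): each fragment costs roughly one extra seek before sequential reading can resume, so the fragment count, rather than the file size, can come to dominate read time.

# Illustrative model (assumed disk characteristics): reading a file costs
# roughly one seek per fragment plus the sequential transfer of its data.

avg_seek_s = 0.008          # average seek + rotational latency per fragment (assumed)
seq_read_mb_s = 200         # sequential read throughput in MB/s (assumed)

def read_time_seconds(file_size_mb: float, fragments: int) -> float:
    """Estimated time to read one file of the given size and fragment count."""
    return fragments * avg_seek_s + file_size_mb / seq_read_mb_s

size_mb = 2048              # a 2 GB file
for fragments in (1, 1_000, 10_000):
    print(f"{fragments:>6} fragments -> {read_time_seconds(size_mb, fragments):6.1f} s to read")

Under these assumptions the same 2 GB file takes about 10 seconds to read in one piece and about 90 seconds in 10,000 pieces, and that penalty is paid for every badly fragmented file in the backup set.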

Snapshots/CDP

It is now common, especially with SANs, to use snapshots and CDP. But any block-level change means an increase in replication traffic or in snapshot size, and larger snapshots take longer to back up. At the same time, fragmented volumes take considerably longer to back up, and defragmenting them after the fact is itself a source of block-level changes. In such situations, actually preventing fragmentation before it can occur is the ideal solution.
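
To see why defragmenting after the fact sits badly with copy-on-write snapshots, here is a crude model (the cluster size and relocation count are assumptions): every cluster the defragmenter moves shows up as a changed block in the next snapshot or replication cycle.

# Crude model (assumed figures): snapshot/replication growth caused by a
# defragmentation pass that relocates already-fragmented clusters.

cluster_kb = 4                      # NTFS cluster size (assumed default)
relocated_clusters = 5_000_000      # clusters moved by one defrag pass (assumed)

extra_delta_gb = relocated_clusters * cluster_kb / (1024 * 1024)
print(f"Defrag pass adds roughly {extra_delta_gb:.0f} GB of changed blocks to the "
      f"next snapshot/replication cycle; preventing fragmentation adds none.")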

Prevention better than cure

So, what is the best way to handle the problem of fragmentation at the NTFS level? Well, prevention is better than cure. It is far better to have a way to leverage the Windows Write Driver so that it writes a file more intelligently, finding an area of contiguous free space large enough to hold the file in one chunk rather than splitting it up into fragments.
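
The article is describing a purpose-built write optimisation, but the general principle can be sketched with ordinary file APIs (this is my illustration, not the product's implementation): if an application declares the final file size before writing, NTFS can try to reserve one contiguous run of clusters instead of growing the file piecemeal.

# Minimal sketch (an illustration, not the optimisation described in this
# article): pre-sizing a file before writing gives NTFS the chance to allocate
# one contiguous extent. It is best effort; the volume still needs a large
# enough run of contiguous free space.

import os

def write_with_preallocation(path: str, data: bytes) -> None:
    with open(path, "wb") as f:
        f.truncate(len(data))   # declare the final size up front
        f.seek(0)
        f.write(data)           # fill the pre-reserved extent

write_with_preallocation("example.bin", os.urandom(64 * 1024 * 1024))  # 64 MB test file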

For example, if Windows were to write a file in 20 fragments, the following would happen: the first fragment of the file would be written to the NTFS volume as its own I/O, which would then travel down the storage stack to the SAN controller, which in turn would write it out to the physical disks in its array. Then the next fragment would cascade down the storage stack, and the next, and so on, until all 20 fragments had travelled down the storage stack to the SAN.

Conversely, if the Windows Write Driver were to write a file in one contiguous (non-fragmented) chunk, only one I/O would travel down the storage stack. This not only provides a performance benefit to the Windows machine but also helps reduce I/O queues at the SAN level and lets the physical disks in the SAN array work less hard, which in turn lengthens their life. And by preventing these excess I/Os, you improve backup times as well.
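
The arithmetic across a whole backup set is straightforward, though the figures below are assumptions chosen only to show scale:

# Illustrative comparison (assumed figures): I/Os sent down the storage stack
# when a backup set's files are written fragmented versus contiguously.

files_in_backup_set = 250_000       # number of files (assumed)
avg_fragments_per_file = 20         # as in the 20-fragment example above

fragmented_ios = files_in_backup_set * avg_fragments_per_file
contiguous_ios = files_in_backup_set          # one I/O per file

print(f"Fragmented writes: {fragmented_ios:,} I/Os down the stack")
print(f"Contiguous writes: {contiguous_ios:,} I/Os down the stack")
print(f"Excess I/Os avoided: {fragmented_ios - contiguous_ios:,}")

Every one of those excess I/Os is queued, processed by the SAN controller and serviced by the disks behind it, and the same fragmentation is paid for again when the backup later reads those files back.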
