BW brief

Human error is the leading cause of data loss

Oscar Arean, Technical Operations Manager, Databarracks

In our recent survey to gauge trends in attitudes and practices among UK IT professionals, human error ranked as the leading cause of data loss, with hardware failure following close behind. The annual survey of 400 IT decision-makers focused on the use of backup and recovery, data security, cloud computing and storage.

Interestingly, in small organisations, 16% of the respondents cited human error as the major cause of data loss, compared with 31% in medium-sized companies. Other high-scoring causes of data loss included hardware failure (the top reason in large organisations at 31%) and data corruption (19%).

Human error has consistently been the biggest area of concern for organisations when it comes to data loss. People will always be your weakest link, but there is a lot that businesses could be doing to prevent it, so we would expect this figure to be lower.

The figures we are seeing this year for data loss due to human error are too high, especially considering how avoidable it is with proper management. I think a lot of SMEs fall into the trap of thinking their teams are not big enough to warrant proper data security and management policies, but we would disagree with that.

In large organisations, managers can lock down user permissions to limit the access employees have to certain data or the actions they are able to take, which limits the amount of damage they can cause. In smaller organisations, there aren't always the resources to do this, and users are often accountable for far more within their roles. That is absolutely fine, but there need to be processes in place to manage the risks that come with that responsibility.

Of course, small organisations don't need an extensive policy on the same scale that a large enterprise would, but their employees need to be properly educated on best practice for handling data and the consequences of their actions on the business as a whole. There should be clear guidelines for them to follow.

Regarding large organisations, it isn't surprising that hardware failure is the main cause of data loss. Firstly, the majority of large organisations will have more stringent user policies in place to limit the amount of damage individuals can cause. Secondly, due to the complexity of their infrastructure and the cost of maintaining it, large organisations may find it more difficult to refresh their hardware as often as smaller organisations, so it is inevitable that at some point it will fail.

Just over half of the respondents this year reported having a business continuity plan (BCP), which is a modest increase on last year. Split the data by size, however, and the results tell a different, if familiar, story.

In 2014, 42% of respondents from small organisations said they did not have a business continuity plan and did not intend to create one in the next 12 months. A year later, it looks as though that sentiment was accurate: only 27% of small organisations reported having a BCP. By contrast, 68% of medium-sized organisations and 75% of large ones reported having a BCP. However, only 42% of those with plans had tested them.

When I talk to customers about disaster recovery testing, they often cite a lack of time as a major blocker. Last year, it was the most common reason given by small organisations when asked why they hadn't tested in 12 months.

More worryingly, the second most common answer was 'I don't know'. It's my opinion that organisations that genuinely don't have the time and resources to perform testing must exhaustively justify that decision; it's an essential piece of due diligence. Simply put, 'I don't know' isn't good enough.



For more information, please visit: www.databarracks.com.


