Everybody makes mistakes in the workplace, and sometimes those mistakes put sensitive information at risk. Philip Bridge discusses how to mitigate these risks.
Complex environments
The complexity in today’s virtual IT environments, combined with the amount of data that streams through them, requires diligent IT administration.
Unfortunately, humans are not infallible. Teams are one accidental deletion or failed backup away from losing access to – or losing entirely – sensitive information.
The results of human error are wide-ranging and varied. It can lead to intellectual property falling into the wrong hands, expose the organisation to greater security risk or result in crippling regulatory fines. It is, therefore, imperative that you invest in robust risk management policies.
A change in perception
The Information Commissioner’s Office (ICO) has been keen to change the perception that a data breach can only occur through the actions of someone outside the organisation. Instead, it defines a breach as “any event that results in the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data.”
Accidents are more prevalent than many think. One survey found that the accidental deletion of information was the leading cause of data loss, driving 41% of cases – far above malicious hacking. Even when an outside attacker is behind a breach, human errors that have resulted in failed data backups could leave the company without the vital event log information that would show where the attack originated.
Common accidents
The failure to document and execute established IT, retention and backup procedures is something that we see time and time again.
It could be that a test server moves into production, but no one has informed IT, so the data is not being backed up. Inaccurate documentation that leads to IT administrators decommissioning a Storage Area Network (SAN) that is still in production is also a common cause. The result is the same: data loss and employee embarrassment.
The number of times the delete key is mistakenly pressed is astonishing. It is important that organisations do their due diligence and ensure the data they delete is truly no longer of value.
Data loss is also often caused by a failure to keep software up to date and to install patches as and when they become available.
Days are busy and resources are stretched. However, failing to apply security patches can leave systems open to evolving security threats. Data loss is also often caused by the simple failure to back up effectively. In a survey, we found that while three in five (60%) businesses had a backup in place at the time of loss, it was not working as they thought. Unfortunately, the failure to establish and follow backup procedures, or to test and verify backup integrity, is a guaranteed recipe for data loss.
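To make that point concrete, a scheduled check of backup recency and integrity might look something like the sketch below. It is only an illustration: the backup location, retention window and checksum convention are assumptions, not a description of any particular product.

    # Illustrative sketch: verify that the newest backup exists, is recent,
    # and matches a stored checksum. Paths and thresholds are assumptions.
    import hashlib
    import os
    import time

    BACKUP_DIR = "/mnt/backups"   # hypothetical backup location
    MAX_AGE_HOURS = 24            # alert if the newest backup is older than this

    def newest_backup(directory):
        # Return the most recently modified backup file, ignoring checksum files.
        files = [os.path.join(directory, f) for f in os.listdir(directory)]
        files = [f for f in files if os.path.isfile(f) and not f.endswith(".sha256")]
        return max(files, key=os.path.getmtime) if files else None

    def sha256_of(path):
        # Compute the SHA-256 digest of a file in chunks.
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify(directory, max_age_hours):
        backup = newest_backup(directory)
        if backup is None:
            return "FAIL: no backup files found"
        age_hours = (time.time() - os.path.getmtime(backup)) / 3600
        if age_hours > max_age_hours:
            return f"FAIL: newest backup is {age_hours:.1f} hours old"
        checksum_file = backup + ".sha256"   # assumes the backup job records one
        if not os.path.exists(checksum_file):
            return "WARN: no checksum recorded, integrity cannot be verified"
        expected = open(checksum_file).read().split()[0]
        if sha256_of(backup) != expected:
            return "FAIL: checksum mismatch, backup may be corrupt"
        return "OK: backup is recent and matches its checksum"

    if __name__ == "__main__":
        print(verify(BACKUP_DIR, MAX_AGE_HOURS))

A check like this is no substitute for a full restore test, but run on a schedule it catches the silent failures described above before they matter.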
The same applies to the failure to test IT security policies effectively. Even the smallest failure can lead to devastating results, including critical data loss and huge expense. It is important to restrict IT administrator passwords to only those users who require them, and to change them when an IT administrator leaves the company. Don’t take chances. Some of the worst data loss cases we see result from a disgruntled employee with a live password intentionally deleting large amounts of critical company data.
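As a rough illustration of how that hygiene can be checked automatically, the sketch below flags administrator accounts whose passwords are overdue for rotation or that belong to leavers. The inventory file and its fields are hypothetical, standing in for whatever directory or identity system an organisation actually uses.

    # Illustrative sketch: flag admin credentials that are overdue for rotation
    # or belong to staff who have left. The CSV format is a hypothetical stand-in
    # for a real directory or identity-management export.
    import csv
    from datetime import datetime, timedelta

    ROTATION_DAYS = 90   # assumed policy: rotate admin passwords every 90 days

    def stale_admin_accounts(inventory_path, today=None):
        today = today or datetime.utcnow()
        cutoff = today - timedelta(days=ROTATION_DAYS)
        findings = []
        with open(inventory_path, newline="") as handle:
            for row in csv.DictReader(handle):
                if row["role"] != "admin":
                    continue
                last_rotated = datetime.fromisoformat(row["password_last_rotated"])
                if row["employment_status"] == "left":
                    findings.append(f"{row['account']}: belongs to a leaver, disable immediately")
                elif last_rotated < cutoff:
                    findings.append(f"{row['account']}: password not rotated since {last_rotated.date()}")
        return findings

    if __name__ == "__main__":
        for finding in stale_admin_accounts("admin_accounts.csv"):
            print(finding)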
Don’t panic
What should IT departments do when the unfortunate happens to ensure the best chance of an effective resolution? First, avoid panicking and rushing into action. If data loss happens, companies should not restore data from backup to the source volume, because this is where the loss occurred in the first place. Nor should they create new data on the source volume, as it could be corrupted or damaged.
Next, be confident in the skills and knowledge you have on your team. IT staff must educate the C-suite to avoid them making decisions that could do more harm than good. When faced with a possible data loss event, the affected volume should quickly be taken offline, because live data will be overwritten at a rapid pace, and the volume should not be formatted in an attempt to resolve corruption.
Have a plan. Staff should follow established processes and ensure data centre documentation is complete and frequently revisited to ensure it is up to date. IT staff should not run volume utilities or update firmware during a data loss event.
Finally, know your environment and the data within it. IT staff must understand what their storage environment can handle and how quickly it can recover. Knowing what data is critical or irreplaceable, whether it can be re-entered or replaced, and the costs for getting that data up and running to a point of satisfaction are important.
Staff must weigh up the costs and risks when determining what is most urgent – getting their systems up and running quickly or protecting the data that is there.
How to separate success from failure
Managing today’s IT environments requires diligent IT administration and effective data management policies. Humans are not infallible.
In many ways, that fallibility is what makes us human. It is time to acknowledge that an accidental deletion or failed backup can happen at any time; it is how you deal with it that will separate success from failure.
Philip Bridge, President, Ontrack