Wednesday, 1 January 2020

Cyber-resiliency best practices: Staying prepared for cyberattack

Major headlines every week focus on data breaches or cyber events against well-known, reputable businesses and government agencies. Cyberattacks are becoming more prolific and more sophisticated, so it’s no longer a question of whether one will affect your organization, but when. Certain cyberattacks, such as ransomware, can cripple an organization, if not shut it down completely, which is why every organization needs to focus on cyber-resiliency.

Cyber-resiliency is the ability to continue operations in the event of a cyberattack. While there are multiple aspects of cyber-resiliency, in this post I want to focus on storage resiliency, which should be designed around three key assumptions:

1. Compromise is inevitable.
2. Critical data must be copied and stored beyond the reach of compromise.
3. Organizations must have the tools to automate, test and learn to recover when a breach or attack occurs.

Let’s break down each of these aspects and look at what organizations can do to bolster their cyber-resiliency.

Compromise is inevitable

While it’s nearly impossible in today’s world to completely avoid data breaches or other cyberattacks, certain practices enhance security and help protect against attacks (a small discovery sketch follows this list):

◉ Discover and patch systems
◉ Automatically fix vulnerabilities
◉ Adopt a zero-trust policy
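
As one illustration of the first point, here is a minimal sketch of patch discovery on a single Debian-based host. It assumes the apt package manager is available; the command and its output format are platform-specific, so treat this as a sketch rather than a general solution.

```python
import subprocess

def list_upgradable_packages():
    """Return packages with pending updates on a Debian-based host.

    A minimal sketch: it shells out to `apt list --upgradable`, which is
    specific to Debian/Ubuntu; other platforms need their own
    package-manager query.
    """
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    # Skip the "Listing..." header line; each remaining line names a package.
    return [line.split("/")[0] for line in result.stdout.splitlines()[1:] if line]

if __name__ == "__main__":
    for pkg in list_upgradable_packages():
        print(f"pending update: {pkg}")
```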

However, when an attack does come, you need a plan that lets you respond and recover rapidly.

Critical data must be copied and stored beyond the reach of compromise

Organizations need to understand what data is required for their operations to continue running, such as customer account information and transactions. Protected copies of this mission-critical data shouldn’t be accessible or manipulable on production systems, which can be compromised.

There are several important considerations in protecting data:

Limit privileged users: Often times, threats come from internal actors or an external agent that has compromised a super user, giving the attacker total control and the ability to corrupt and destroy production and backup data. You can help prevent this by limiting privileged accounts, and only authorizing access on as-needed basis.

Generate immutable copies: It’s critical to have protected copies of your data that can’t be manipulated. There are multiple storage options for ensuring the immutability of your most critical data, such as Write Once Read Many (WORM) media like tape, cloud object storage or specialized storage devices. By contrast, a snapshot that can be mounted to a host is still corruptible.
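
For example, here is a minimal sketch of writing an immutable copy with Amazon S3 Object Lock, one implementation of WORM semantics in cloud object storage. The bucket and key names are hypothetical, and the bucket must have been created with Object Lock enabled.

```python
from datetime import datetime, timedelta, timezone

import boto3  # AWS SDK for Python; pip install boto3

def write_immutable_copy(bucket, key, data, retain_days=30):
    """Store an object under S3 Object Lock in compliance mode.

    In compliance mode the object cannot be overwritten or deleted by
    any user, including root, until the retention date passes.
    """
    s3 = boto3.client("s3")
    retain_until = datetime.now(timezone.utc) + timedelta(days=retain_days)
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=data,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )

# Hypothetical bucket and key names, for illustration only.
write_immutable_copy("protected-backups", "ledger/2020-01-01.bak", b"...")
```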

Maintain isolation: You also need to maintain logical and physical separation between protected copies of the data and host systems. For example, put a network air gap between a host and its protected copies.
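
One way to make that separation testable: from a production host, a connection attempt to the protected copy’s network should fail. The sketch below, with a hypothetical host name and port, encodes that check.

```python
import socket

def verify_airgap(backup_host, port=22, timeout=3):
    """Confirm the backup target is unreachable from this production host."""
    try:
        with socket.create_connection((backup_host, port), timeout=timeout):
            pass
    except OSError:
        return True   # connection refused or timed out: isolation holds
    return False      # we connected, so the "air gap" is not real

# Hypothetical backup-target host name.
assert verify_airgap("backup-vault.internal"), "backup target reachable from production!"
```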

Consider performance: Different methods of data protection come with different performance characteristics, such as copy duration (how long will the backup take, and what is the performance impact on production?), recovery point objective (RPO: how current is my protected data?) and recovery time objective (RTO: how fast can I restore my data?). Organizations need to understand the tradeoffs between their budgets and their business objectives.
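
The RPO/RTO tradeoff can be made concrete with back-of-the-envelope arithmetic, as in this sketch (the numbers are illustrative only):

```python
def worst_case_rpo_hours(backup_interval_hours):
    """Data written just after a backup is unprotected until the next one."""
    return backup_interval_hours

def estimated_rto_hours(data_size_tb, restore_rate_tb_per_hour):
    """Restore time is bounded by how fast protected data can be read back."""
    return data_size_tb / restore_rate_tb_per_hour

# Illustrative numbers: a 6-hour backup cycle, 20 TB restored at 2 TB/hour.
print(worst_case_rpo_hours(6))        # up to 6 hours of data could be lost
print(estimated_rto_hours(20, 2))     # roughly 10 hours to restore
```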

Organizations must have the tools to automate, test and learn to recover when a breach or attack occurs

Build automation: Restoration and recovery normally involve multiple complex steps and coordination across multiple systems. The last thing you want to worry about in a high-pressure, time-critical situation is the possibility of user error. Automating recovery procedures provides a consistent approach in any situation.
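
Here is a minimal sketch of such a runbook, with hypothetical step scripts standing in for real storage and host commands:

```python
import logging
import subprocess

logging.basicConfig(level=logging.INFO)

# Hypothetical recovery steps; each wraps a real storage or host command.
RECOVERY_STEPS = [
    ("isolate compromised hosts", ["./isolate_hosts.sh"]),
    ("mount protected copy",      ["./mount_protected_copy.sh"]),
    ("restore critical data",     ["./restore_data.sh"]),
    ("validate restored data",    ["./validate_restore.sh"]),
]

def run_recovery():
    """Execute the recovery runbook in order, stopping on the first failure.

    Encoding the steps in one place gives a consistent, repeatable
    procedure instead of ad hoc manual commands under pressure.
    """
    for name, command in RECOVERY_STEPS:
        logging.info("starting step: %s", name)
        result = subprocess.run(command)
        if result.returncode != 0:
            logging.error("step failed: %s; aborting recovery", name)
            raise SystemExit(1)
        logging.info("completed step: %s", name)

if __name__ == "__main__":
    run_recovery()
```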

Make it easy to use: Recovery methods should be straightforward enough for operators to execute without calling 10 different engineers, especially in a high-pressure situation. Tools such as push-button web interfaces that launch an automated disaster recovery process make recovery more accessible.
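
As a sketch of such a push-button interface, the snippet below exposes the hypothetical run_recovery() runbook from the previous sketch behind a single HTTP endpoint, using Flask as one possible web framework:

```python
from threading import Thread

from flask import Flask  # pip install flask

# Reuses the hypothetical run_recovery() runbook sketched above,
# assumed here to live in a module named runbook.
from runbook import run_recovery

app = Flask(__name__)

@app.route("/recover", methods=["POST"])
def recover():
    """One POST kicks off the full automated recovery in the background."""
    Thread(target=run_recovery, daemon=True).start()
    return {"status": "recovery started"}, 202

if __name__ == "__main__":
    app.run(port=8080)
```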

Practice makes perfect: Testing the recovery process regularly is important, not only to validate the process but also to build familiarity for the people executing it. This can be done on recovery systems that won’t affect production.
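
One simple way to score such a drill is to compare restored files on the test system against a known-good manifest of checksums, as in this sketch (the manifest file and its format are assumptions):

```python
import hashlib
import json

def sha256(path):
    """Hash a file in chunks so large restores don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_drill(manifest_path):
    """Compare restored files against a manifest captured at backup time.

    The manifest is a hypothetical JSON map of {path: expected_sha256}.
    """
    with open(manifest_path) as f:
        manifest = json.load(f)
    failures = [p for p, expected in manifest.items() if sha256(p) != expected]
    if failures:
        raise RuntimeError(f"drill failed, mismatched files: {failures}")
    print("recovery drill passed: all restored files match the manifest")

verify_drill("restore_manifest.json")
```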

It’s not enough to focus on cybersecurity and the prevention of cyberattacks; it’s equally important to be able to recover and continue operations when attacks occur.
