Organizations have been doing backup and recovery for decades, and many feel that they have reactive data protection under control. If an event like a power failure or natural disaster takes down their data center, they use their replica site hundreds of miles away to continue operations and, if need be, recover their data from disk, tape, or cloud storage. It’s a well-understood practice.
However, enterprises are now seeing the impact of cyberattacks such as ransomware, which alone is poised to exceed $265 billion in global damage costs by 2031. These problems differ from natural disasters or hardware or power failures in that someone is actively trying to prevent you from succeeding with a traditional recovery approach.
Plus, cyberattacks are getting more sophisticated – and that’s only accelerating with the advent of artificial intelligence, which has the ability to write and improve upon code. And launching a cyberattack is now easy with ransomware as a service, which means that people don’t need deep expertise to hold your data hostage or steal your data and sell it on the dark web.
It’s also important to note that bad actors are now targeting the configuration files of applications and the datasets you would traditionally use to recover from an attack. By making it harder to get back to normal operations, attackers make their targets more willing to pay the ransom.
These harmful entities are also going after data like personally identifiable information and payment information, which are covered by regulatory requirements, and more data regulations are coming soon. The European Union’s Digital Operational Resilience Act (DORA) takes effect in January 2025, and similar requirements are likely coming to the Americas and the APAC region.
The fact that the National Institute of Standards and Technology recently introduced the NIST Cybersecurity Framework 2.0 signals this new and evolving data and cybersecurity landscape.
This new landscape is extremely complex to navigate – especially in an environment where cybersecurity experts are costly, hard to keep, and in short supply. It calls for a new approach to data resilience, one that combines cyber readiness with traditional data protection.
To achieve operational resilience in this landscape, we believe there are seven critical layers to a proper data resilience strategy:
- Monitoring, posture assessment, testing, and incident response
- Anomaly detection and malware scanning
- Pen/patch/upgrade testing and DevSecOps
- Forensics and recovery in minutes
- A diverse partner ecosystem for compliance
- Efficient, dependable backup and recovery
- Reliable, secure, immutable infrastructure
Here’s how to secure your future with these seven critical layers.
Start with a posture assessment
Imagine you’re a brokerage and your average cost of downtime is $5 million an hour. If you got hit with a ransomware attack, could you survive being offline for two, three or four weeks? If your business goes offline because you can’t access your data, what does that do to your bottom line? What will you owe in regulatory fines? How will this impact customer trust?
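A back-of-the-envelope calculation, using the illustrative $5 million per hour figure from the scenario above, shows how quickly those losses compound:

```python
# Back-of-the-envelope downtime cost, using the hypothetical
# $5M/hour brokerage figure from the scenario above.
HOURLY_COST = 5_000_000  # USD per hour of downtime (illustrative)

def downtime_cost(weeks: float) -> int:
    """Total cost of being offline for the given number of weeks."""
    hours = weeks * 7 * 24
    return int(hours * HOURLY_COST)

for weeks in (2, 3, 4):
    print(f"{weeks} weeks offline: ${downtime_cost(weeks):,}")
```

At that rate, even a two-week outage runs well past a billion dollars before regulatory fines or reputational damage are counted.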
It’s a massive problem that could result in a huge – potentially fatal – hit to your business.
Don’t panic. Take a step back. Employ your internal experts and/or work with a trusted partner to understand your cyber resilience, data protection, and overall operational resilience posture.
Bring in an independent voice
This is a broad remit. No single person in your organization will be able to identify every gap on their own.
Also, be aware that internal teams might have blinders on. Your network team will likely think that the network is fine. Your infrastructure team will say the infrastructure is great. Or perhaps these teams will elect to use this exercise as a way to get extra budget in a predetermined area.
Bring in an independent voice to help you get a more realistic assessment of your posture. A third party who will have no agenda other than helping you understand where you are today, define your goals, and make the right decisions around the people, process, and technology you need.
Understand reactive technologies are no longer enough
Reactive approaches alone may have worked in the past. But in today’s world of frequent and increasingly sophisticated attacks, you need to be more proactive and much, much faster.
Move to a posture in which you are using artificial intelligence both to monitor for anomalous activity and scan for malware in your environment. Embrace the power of automation to act, whether that’s to notify an administrator of anomalies to investigate or to rapidly isolate at-risk systems.
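As a minimal illustration of the monitoring idea (a sketch, not any particular product’s implementation), anomaly detection on backup telemetry can be as simple as flagging change rates that deviate sharply from the historical baseline; a sudden spike in changed or encrypted files is a classic ransomware signal:

```python
from statistics import mean, stdev

def flag_anomaly(change_rates, threshold=3.0):
    """Flag a backup change rate more than `threshold` standard
    deviations above the historical mean (simple z-score rule).
    `change_rates` is a list of, e.g., percent-of-files-changed
    per backup run; the last value is the one under test."""
    history, latest = change_rates[:-1], change_rates[-1]
    mu, sigma = mean(history), stdev(history)
    z = (latest - mu) / sigma if sigma else 0.0
    return z > threshold

# Hypothetical telemetry: nightly change rates hover around 2%;
# a ransomware encryption run touches most of the file system at once.
normal = [2.1, 1.9, 2.3, 2.0, 1.8, 2.2]
print(flag_anomaly(normal + [2.4]))   # ordinary variation
print(flag_anomaly(normal + [87.0]))  # suspicious spike worth isolating
```

In practice this simple rule would feed the automation described above: notify an administrator, or trigger isolation of the affected systems.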
Address data resilience across your entire environment
The rapid growth of data and the widespread adoption of IoT, edge computing, and distributed storage are expanding the attack surface. It’s no longer enough to secure the data center: you need data resiliency, cyber readiness, and rapid recovery at scale wherever your data – and all of the devices that touch that data – exist. In today’s hybrid world, that’s anywhere and everywhere.
That can make ensuring data resilience complex and hard to get your arms around. Work with a trusted partner with the ecosystem, people, processes, and technology to streamline your journey and provide consistent protection from edge to core to cloud.
Adopt a reliable, secure, immutable infrastructure
Chances are good that you have reliable backup and recovery. You probably also have a reasonable amount of security around it. But be sure you also have robust infrastructure, which is characterized by data immutability, consistent deployment processes, and enhanced resilience against unexpected system failures.
With these critical capabilities, you can take immutable snapshots of your database environment and ensure that file data cannot be overwritten so that if your data is encrypted, you have the previous version that you can fail back to. That, and forensic capabilities to determine the right point to recover to prior to malware entering your environment, will empower you to recover from an incident very, very quickly.
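The forensic step of picking the right recovery point – the last snapshot taken before the malware entered – can be sketched like this (hypothetical snapshot records, not any vendor’s API):

```python
from datetime import datetime

def clean_recovery_point(snapshots, infection_time):
    """Return the most recent immutable snapshot taken strictly
    before the estimated time the malware entered the environment,
    or None if every retained snapshot postdates the infection."""
    clean = [s for s in snapshots if s < infection_time]
    return max(clean, default=None)

# Hypothetical nightly snapshots at 02:00, May 1-7
snaps = [datetime(2024, 5, d, 2, 0) for d in range(1, 8)]
infected = datetime(2024, 5, 4, 13, 30)  # forensics puts entry here

print(clean_recovery_point(snaps, infected))  # May 4, 02:00 snapshot
```

The immutability guarantee is what makes this work: because the snapshots cannot be overwritten or encrypted, the chosen recovery point is known-good by construction.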
Don’t throw the baby out with the bathwater
You’ll also want to explore how you can do penetration, patch, and upgrade testing at scale in a way that doesn’t impact your production environment. Plus, you’ll want to manage the governance of data, including how long it is retained, who can access it, and when it should be deleted.
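Retention rules like these lend themselves to automated enforcement. A minimal sketch, with hypothetical policy values and record shapes, that flags records whose retention window has elapsed:

```python
from datetime import datetime, timedelta

# Hypothetical governance policy: how long each data class is kept
RETENTION = {
    "payment": timedelta(days=365),
    "pii": timedelta(days=730),
}

def due_for_deletion(records, now):
    """Yield the IDs of records past their retention period.
    Each record is a (record_id, data_class, created_at) tuple."""
    for rec_id, data_class, created in records:
        if now - created > RETENTION[data_class]:
            yield rec_id

now = datetime(2024, 6, 1)
records = [
    ("r1", "payment", datetime(2023, 1, 1)),  # past the 1-year window
    ("r2", "pii", datetime(2023, 1, 1)),      # still within 2 years
]
print(list(due_for_deletion(records, now)))  # ['r1']
```

Real governance tooling also tracks who accessed the data and why, but the core lifecycle check is this simple comparison applied consistently across the estate.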
You may be thinking all of the above is a lot to consider and tackle. But rest assured, you don’t need to replace everything you have and rebuild your environment from scratch.
By working with a proven partner, you can identify your biggest gaps, bring the right people across your organization to the table, and decide what you need today and going forward to ensure you have the appropriate data protection, security, compliance, and cyber resilience.