Advanced ransomware campaigns expose need for AI-powered cyber defense


In this Help Net Security interview, Carl Froggett, CIO at Deep Instinct, discusses emerging trends in ransomware attacks, emphasizing the need for businesses to use advanced AI technologies, such as deep learning (DL), for prevention rather than just detection and response.

He also talks about the shift in budget priorities in 2024 toward ransomware prevention technologies. He foresees AI, particularly deep learning, becoming more integrated into business processes, automating workflows, and shaping workplace experiences.

What are the emerging trends in ransomware attacks, and how should businesses prepare for them using AI technologies?

Recent data from Deep Instinct found that the total number of ransomware victims in 2023 increased significantly. Amazingly, there were more victims of ransomware attacks in the first half of 2023 than in all of 2022. Not only are we reporting on this uptick, but respected non-profit organizations like FS-ISAC are also acknowledging this problematic trend.

This clearly indicates to me that the industry's current approach is failing and that, once again, we need a shift to combat the evolving threat landscape. Ransomware has moved the goalposts on "detect and respond" approaches: attacks simply unfold too fast to respond to, and older technology cannot keep up with new variants. This is one of the reasons we are seeing an increase in victims.

The attacker techniques have changed; ransomware attacks are being carried out as large-scale campaigns, affecting a significant number of victims at once, such as what we saw this year with the Zimbra and MOVEit vulnerability attacks. With the rapid adoption of AI by bad actors, we’ll see a continued development of malware that’s more sophisticated than ever before.

Thanks to the advanced capabilities of AI, we can now prevent ransomware and other cyber attacks rather than merely detect and respond to them. As the evidence shows, responding is no longer good enough; we need to return to a prevention-first philosophy, with prevention capabilities embedded at different points in infrastructure, storage, and business applications using AI. This is the only way businesses can truly protect themselves from advanced ransomware and other threats: by leveraging a more sophisticated form of AI, such as deep learning, to fight AI-powered threats.

How does deep learning differ from standard machine learning models in identifying and mitigating ransomware threats?

Not all AI is created equal, and that's especially evident when you compare deep learning with machine learning-based solutions. Most cybersecurity tools leverage machine learning (ML) models that present several shortcomings to security teams when it comes to preventing threats. For example, these offerings are trained on limited subsets of the available data (typically 2-5%), achieve just 50-70% accuracy against unknown threats, and introduce many false positives. ML solutions also require heavy human intervention, and their small training sets expose them to human bias and error.

DL, in comparison, is built on neural networks, so its “brain” continuously trains itself on raw data. Because DL models understand the building blocks of malicious files, DL makes it possible to implement and deploy a predictive prevention-based security program – one that can predict future malicious behaviors, detecting and preventing unknown threats, ransomware, and zero-days.

The outcomes of using DL as a foundation are remarkable for a business and its cybersecurity operations. First, it delivers a consistently high efficacy rate against known and unknown malware, combined with extremely low false positive rates compared with any ML-based solution. The DL core only requires an update once or twice a year to maintain that efficacy, and because it operates independently, it does not require constant cloud lookups or intel sharing. This makes it extremely fast and privacy-friendly, with no cloud analytics required.
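To make the idea of a model that "understands the building blocks of malicious files" concrete, here is a toy sketch of a feedforward network that scores raw bytes of a file. This is not Deep Instinct's model: the architecture is a minimal stand-in, and the weights are random placeholders where a real deep learning classifier would learn values from millions of labeled samples.

```python
# Toy sketch of byte-level malware scoring (illustrative only; NOT a real
# product's model). A real DL classifier learns its weights from huge
# labeled corpora of raw files; here they are seeded random placeholders.
import math
import random

random.seed(0)

INPUT_BYTES = 64   # fixed-length window of the file that we score
HIDDEN = 8         # hidden-layer width (arbitrary for the sketch)

# Placeholder weights; training would set these from labeled raw files.
W1 = [[random.uniform(-0.1, 0.1) for _ in range(INPUT_BYTES)]
      for _ in range(HIDDEN)]
W2 = [random.uniform(-0.1, 0.1) for _ in range(HIDDEN)]

def score(file_bytes: bytes) -> float:
    """Return a malice score in (0, 1) for the first INPUT_BYTES of a file."""
    window = file_bytes[:INPUT_BYTES].ljust(INPUT_BYTES, b"\x00")
    x = [b / 255.0 for b in window]                      # normalize raw bytes
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x)))  # ReLU layer
              for row in W1]
    logit = sum(w * h for w, h in zip(W2, hidden))
    return 1.0 / (1.0 + math.exp(-logit))                # sigmoid output

# Example: score a PE-style header ("MZ" magic bytes) followed by padding.
sample = b"MZ\x90\x00" + bytes(60)
print(score(sample))
```

The key point the sketch illustrates is that the model consumes raw bytes directly, with no hand-written signatures or manually engineered features, which is why a trained version can generalize to variants it has never seen.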

How can deep learning technologies reduce false positives, and what is the potential impact on organizational cost savings?

Security operation center (SOC) teams are inundated with alerts and potential security threats they need to investigate. With legacy ML tools, such as traditional AV solutions, it's incredibly difficult for teams to identify which alerts are truly worth investigating versus noise. There are many reasons for this, but the "detect and respond" philosophy means you must collect a great deal of data, which is expensive to store and maintain and which, as any SOC member will attest, produces a very high false positive rate.

This in turn undermines SOC effectiveness: teams cannot protect the organization, and it also erodes their ability to sustain the team itself. The volume and time-intensive nature of addressing false positive alerts are taking a toll on the mental health of security teams, with more than half of SOC teams indicating their stress levels have increased over the past twelve months due to "staffing and resource limitations." Without the proper technology in place, SOC teams already struggling with talent constraints are forced to focus on mundane monitoring tasks and spend their days chasing false positives.

DL-powered solutions address this problem head-on. They produce extremely low false positive rates because they’re very accurate, giving SOC teams back time to focus on real, actionable alerts and pinpoint threats faster, with greater efficiency. By spending time on real threats, they can optimize their threat posture and engage in more proactive threat hunting which significantly improves the risk posture of their organization.
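A back-of-envelope calculation shows why even a modest drop in false positive rate transforms SOC workload. The event volume and both rates below are illustrative assumptions, not figures from the interview.

```python
# Illustrative arithmetic: how false positive rate (FPR) drives daily alert
# noise. All numbers here are assumptions for the example, not vendor data.
EVENTS_PER_DAY = 1_000_000   # files/behaviors scanned per day (assumed)

def daily_false_positives(events: int, fpr: float) -> int:
    """Noise alerts per day, given how many events a tool scores daily."""
    return round(events * fpr)

ml_fpr = 0.01     # 1% FPR, plausible for a noisy legacy ML/AV stack (assumed)
dl_fpr = 0.0001   # 0.01% FPR, the kind of rate DL vendors claim (assumed)

print(daily_false_positives(EVENTS_PER_DAY, ml_fpr))  # → 10000
print(daily_false_positives(EVENTS_PER_DAY, dl_fpr))  # → 100
```

Under these assumptions, a 100x improvement in FPR turns 10,000 noise alerts a day into 100, which is the difference between a team that only triages and one that has time for proactive threat hunting.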

As organizations start budgeting for 2024, what should they prioritize when investing in ransomware prevention technologies?

With 62% of the C-suite confirming ransomware was their number one concern this past year, we’ll see businesses shifting their budgets in 2024 – investing in prevention technologies that stop ransomware, known and unknown threats, and other malware.

As a whole, the industry has traditionally relied on antiquated, reactionary solutions like endpoint detection and response (EDR) for protection. While EDR tools are still useful from a postmortem standpoint, organizations that invest only in those tools are "assuming breach" and hoping remediation efforts succeed. Clearly, given the evidence, this approach is failing year after year as the threat landscape changes. Just as signature-based solutions eventually failed and we moved to EDR, EDR is now at the same tipping point.

In fact, IDC recently predicted a "rebirth" of sorts in endpoint protection, as organizations seek better EDR capabilities and gravitate toward offerings with greater efficacy. We're in an EDR post-honeymoon period, where predictive prevention comes into full effect, blocking attacks before they ever enter your network.

The only way to combat increasingly sophisticated AI threats is by shifting from an “assume breach” mentality to a proactive, preventative approach to cybersecurity. Security teams won’t win the battle against AI with legacy tools; rather, organizations require cybersecurity solutions that are natively built with DL models to mitigate the volume and velocity of evolving AI threats. In 2024, we’ll see organizations make room in their budgets to integrate advanced AI technologies into their cybersecurity strategies to enhance security resilience and mitigate the likelihood of successful attacks.

How do you foresee AI, particularly deep learning models, becoming more integrated into business processes in the coming year?

In 2023, we saw AI burst onto the scene; 2024 will be the year AI becomes part of business planning, processes, and decision-making. This includes automating workflows, optimizing processes, and prioritizing alerts, as we already see with AI co-pilots. For now, these add-ons merely assist; they do not prevent.

Additionally, as AI becomes fully integrated, younger generations won’t have the same hands-on experiences around workplace tasks like troubleshooting, outages, and security incidents, as much of this will be automated by AI. For leaders, the question will become: how do we continue to build and shape people’s skills and careers when opportunities to learn the basic building blocks in the workforce have been removed? I expect this to be answered before the end of next year.
