Enhancing Blue Team Defense: The Power of AI
AI is transforming cybersecurity on both sides of the battle. As threat actors use AI to enhance and amplify their attacks, the Blue Teams responsible for identifying security threats in the operating environment are exploring how to leverage large language models (LLMs) to up their own games. LLMs show great promise for helping defenders do their jobs better and faster, so it's no wonder that teams are actively looking for ways to put them to work.
However, AI is not a magic solution. For Blue Teams to get the most out of the technology, they must first understand what LLMs can do well, then figure out which parts of their workflows could best benefit from AI-enablement.
What can LLMs do well?
Although the rapid pace of AI improvement makes predictions particularly difficult, there are certain strengths of LLMs that are unlikely to change much. These include:
- Content Generation and Manipulation — Creating/manipulating content, be it text, code, or images, per human instructions
- Knowledge Augmentation and Retrieval — More easily accessing relevant information from a database or document collection through query or chat
- Document Summarization — Extracting key data or facts from a document
- Language Translation — Translating text or code from one language to another
- Context Analysis & Interpretation — With proper training, surmising meaning and drawing conclusions from provided content
- Instruction Following — Certain types of models can follow step-by-step directions with great specificity
While each of these strengths can help streamline Blue Team tasks, human oversight is still critical. For instance, in document summarization, a Blue Team member should review the output to confirm that everything in the summary is relevant to the Blue Team's work and that the LLM didn't omit important information or make things up.
Once security leaders understand what LLMs are good at, the next step is mapping their specific use cases to the strengths each one pairs with.
Aligning AI Strengths with Blue Team Needs
The key to maximizing AI impact is to find where it provides the most value for the least effort. These spots will usually be some of your most commonly invoked processes or workflows. You might think that Automated Incident Detection is the most obvious use case for LLMs. However, this has historically proven to be more difficult. Instead, look for quick wins in some of the other SOC (or SOC-adjacent) functions.
Cyber Threat Intelligence
Much of the work around CTI involves heavy research workloads and creating summaries, reports, emails, or similar deliverables. For example, intel teams are often responsible for everything from reviews of dark web forums to drafting threat landscape reports. LLM document summarization is a great way to help your team digest more information in the same amount of time. Also, content generation can help streamline the creation of the intel deliverables that keep stakeholders informed on pressing security matters.
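One common pattern for digesting a large research workload is map-reduce summarization: summarize each source individually, then summarize the summaries into a single brief. The sketch below is a minimal, hypothetical illustration of that pattern; `summarize` stands in for a real LLM call and is stubbed with simple truncation so the flow runs standalone.

```python
# Map-reduce summarization sketch for digesting a pile of intel sources.
# NOTE: summarize() is a stand-in for an LLM summarization call,
# stubbed here with truncation so the pattern can run without a model.

def summarize(text: str, max_words: int = 20) -> str:
    # Stub: a real implementation would call a trained model here.
    return " ".join(text.split()[:max_words])

def digest(documents: list[str]) -> str:
    """Map: summarize each source. Reduce: summarize the summaries."""
    per_doc = [summarize(doc) for doc in documents]
    return summarize("\n".join(per_doc), max_words=60)

brief = digest([
    "Forum post discussing a new ransomware affiliate program ...",
    "Paste containing leaked credentials for several .edu domains ...",
])
```

The same two-stage shape works whether the inputs are dark web forum threads, vendor advisories, or internal tickets; only the summarization prompt changes.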
Alert Triage, Incident Response, and Digital Forensics
These use cases, though technically separate, are woven together through a set of six questions that every Blue Team must answer once they receive an alert:
- What does this alert mean?
- Was this an actual attack?
- Was the attack successful?
- What assets were affected?
- What did the attacker do (or try to do)?
- How should we respond?
The first three questions are the most impactful since the alert triage process requires answering them for every alert. If the answer to #2 or #3 is "no," then there's no need to answer the remaining questions. Answering these questions quickly and accurately is critical for efficient triage, which makes them a prime target for AI assistance.
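The short-circuit described above can be sketched as a simple loop over the six questions. This is an illustrative skeleton, not a real triage system: `ask_llm` is an assumed callable that would query a trained model, stubbed here so the flow runs standalone.

```python
# Hypothetical sketch of the triage flow: walk the six questions,
# stopping early when #2 or #3 comes back "no".

TRIAGE_QUESTIONS = [
    "What does this alert mean?",
    "Was this an actual attack?",
    "Was the attack successful?",
    "What assets were affected?",
    "What did the attacker do (or try to do)?",
    "How should we respond?",
]

def triage(alert: dict, ask_llm) -> dict:
    """Answer questions in order; skip the rest once #2 or #3 is 'no'."""
    answers = {}
    for i, question in enumerate(TRIAGE_QUESTIONS, start=1):
        answers[question] = ask_llm(alert, question)
        # If it wasn't a real attack (#2) or the attack failed (#3),
        # the remaining questions don't need answering.
        if i in (2, 3) and answers[question].strip().lower() == "no":
            break
    return answers

# Stubbed responder that treats every alert as a false positive.
benign = triage(
    {"rule": "port-scan"},
    lambda alert, q: "no" if "actual attack" in q else "n/a",
)
```

With the stub answering "no" to question #2, only the first two questions are ever asked, which is exactly the time savings the triage process is after.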
You can use a properly trained language model to answer the first question by providing it with the alert details, along with contextual data like IP addresses, port information, and hostnames. With the right database to draw from, the LLM can also provide examples of both malicious and benign activity as well as guidance on how to judge whether the attack was successful or not. By cutting the time it takes for an analyst to answer these first key questions, AI has the potential to drastically accelerate the SOC's ability to deal with incoming alerts.
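Feeding the model alert details plus context usually comes down to assembling a structured prompt. Here is a minimal sketch of that step; the field names (`src_ip`, `hostname`, etc.) are illustrative assumptions, since real alert schemas vary by SIEM.

```python
def build_triage_prompt(alert: dict, context: dict) -> str:
    """Assemble alert details and network context into one prompt.
    Field names are illustrative; real alerts vary by SIEM."""
    lines = [
        "You are assisting a SOC analyst. Explain what this alert means.",
        f"Alert: {alert['name']} (severity: {alert['severity']})",
        "Context:",
    ]
    # Include only the contextual fields that are actually present.
    for key in ("src_ip", "dst_ip", "dst_port", "hostname"):
        if key in context:
            lines.append(f"- {key}: {context[key]}")
    lines.append(
        "Then state whether the activity looks malicious or benign, and why."
    )
    return "\n".join(lines)

prompt = build_triage_prompt(
    {"name": "Suspicious PowerShell execution", "severity": "high"},
    {"src_ip": "10.0.0.5", "hostname": "WS-042"},
)
```

Keeping absent fields out of the prompt, rather than sending empty placeholders, gives the model less room to speculate about data it was never given.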
Documenting After the Incident
After every incident, there are lessons that will prepare the Blue Team to prevent and respond to similar situations in the future. For those familiar with the PICERL model for security incident response — Prepare, Identify, Contain, Eradicate, Recover, Lessons Learned — the “Lessons Learned” phase is the key driver of continuous improvement for incident response.
LLMs’ document summarization capability can turn raw response notes into digestible content for other responders or stakeholders. Their content generation ability is also very helpful in creating rough drafts of incident reports to help provide a more detailed explanation of what happened and why. In either case, Blue Team members should review outputs to ensure accuracy, confirm all the key information was included, and verify no AI hallucinations occurred.
Grasping Every Advantage Available
Blue Teams play a vital role in the continuous battle against today’s threat actors, and they need every advantage at their disposal. After all, the bad actors are using AI too.
Blue Teams must implement AI solutions with specific and targeted goals. More importantly, despite AI’s expansive capabilities, each workflow must still be human-led. Understand what your LLMs can do, prioritize where they will make the most impact, and continuously train each model so your teams achieve the best outcomes.