How should organizations navigate the risks and opportunities of AI?


As we realize exciting new advances in the application of generative pre-trained transformer (GPT) technology, our adversaries are finding ingenious ways to leverage the same capabilities to inflict harm. There is evidence to suggest that offensive actors are using AI and machine learning techniques to carry out increasingly sophisticated, automated attacks.

Rather than running from the potential of this evolving technology, organizations should embrace AI tools in their cyber defense strategies. The opportunities and rewards of AI far outweigh the risks, particularly for organizations working in partnership with an experienced cybersecurity provider.

How hackers are using AI

Thus far, cybercriminals have primarily used AI not to invent new forms of cyber warfare, but to increase the effectiveness of the strategies in their tried-and-true playbooks.

For example, phishing has long been a common and successful technique for attackers. Members of the public are gradually becoming more educated about its obvious red flags, yet it takes only a single person clicking carelessly and entering privileged information to open the door to a cyberattack.

Well-trained users can usually spot phishing attempts because of errors in the words or phrases attackers use in their emails. But with the advent of sophisticated AI technologies, especially those built on large language models, adversaries can craft very plausible messages even when the target language isn't their native one. The ability to use deepfake images and videos gives attackers even more options for exploiting unsuspecting victims. The more legitimate phishing attacks look, the harder they are to detect and stop.

We’re also starting to see signs of attackers using AI to evade detection within systems they’ve already breached, as well as to create mutating malware that spreads itself and rewrites its own code to constantly change its appearance. As AI continues to advance, we expect to see additional applications that creatively exploit the technology, which means our cyber defenses will need to evolve continually as well.

Organizational safeguards

For organizations to use AI technology effectively, the process starts with educating every individual on staff about how these technologies work and the potential risks they pose. Any program of education needs to be paired with clear, well-enforced policies that guide the organization’s use of AI technology.

Many individuals are curious about experimenting with ChatGPT but may not fully understand the technological, operational or legal risks of using AI services. Any company-sanctioned use of AI needs to align with regulatory requirements and the organization’s IT risk management strategy so that the impact of any associated risks is appropriately mitigated.

Incorporating AI into a cybersecurity plan

After accounting for these basic safeguards, organizations can begin exploring ways to incorporate AI into their established cybersecurity plan. One way to do that is by evaluating the NIST Cybersecurity Framework, published by the U.S. National Institute of Standards and Technology. Adoption of the framework is voluntary, but it provides a sensible set of guidelines for mitigating organizational cybersecurity risks and protecting networks and data.

The NIST Cybersecurity Framework’s five functions — identify, protect, detect, respond and recover — represent the primary pillars for a successful cybersecurity program. AI can help address some of the complexities of cybersecurity within each function:

1. Identify: AI is especially helpful for identifying and categorizing organizational assets, whether hardware, software or people. It is also valuable in risk monitoring, where it can help identify new and emerging threats far more adaptively than traditional approaches allow.

2. Protect: AI used in protective technologies, from network point-of-presence devices to firewalls to endpoint protection software, can help ensure delivery of critical infrastructure services and limit the impact of threats. For instance, vulnerability scanners can automatically identify outdated software, misconfigurations or weak security settings, then generate reports for remediation. This helps organizations address vulnerabilities proactively, before attackers can exploit them.

3. Detect: AI is prevalent in this area, with behavioral-based anomaly detection used within security information and event management (SIEM) platforms to identify deviations from normal behavior that might indicate malicious activity (a simplified sketch of this idea appears after this list).

Used in intrusion prevention systems, machine learning algorithms can assist in the early detection of zero-day attacks and previously unseen threats within network security technologies. AI models are also used to analyze emails, URLs and attachments to detect phishing attempts and malicious links, and natural language processing techniques are commonly used to analyze email content and identify suspicious patterns within email protection technologies (see the phishing-triage sketch after this list). From there, alerts on potential cybersecurity events are sent out automatically, ensuring that they’re addressed in a timely manner.

4. Respond: Email protection provides a great example. Many modern protection systems not only use AI to detect phishing attempts, but also take automatic, nearly instantaneous steps to respond to an attack. They monitor for threats in real time, determine the legitimacy of email sources, quarantine harmful messages and even automatically prevent future attacks from the same source.

Some AI-driven expert systems even use behavioral machine learning to enrich security alerts, automate their investigation and determine appropriate response measures. That radically cuts down attacker dwell time within an environment, which can dramatically reduce the overall cost of a breach.

5. Recover: After responding to a breach, organizations usually need to restore any capabilities or services that were impaired, and to understand exactly what happened. AI can aid this recovery and forensic work: while humans have their limits when it comes to sifting through large volumes of historical data for patterns, AI is well suited to the task.
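To make the Detect function’s behavioral anomaly detection more concrete, here is a minimal sketch of the kind of baselining a SIEM-style tool might perform, using scikit-learn’s IsolationForest. The feature names, sample values and contamination setting are illustrative assumptions rather than a description of any particular product.

```python
# Minimal sketch: behavioral anomaly detection over hypothetical login telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Assumed per-user, per-hour features derived from authentication logs:
# [logins, failed_logins, distinct_source_ips, megabytes_transferred]
baseline = np.array([
    [12, 1, 1, 120.0],
    [10, 0, 1, 95.5],
    [14, 2, 2, 150.2],
    [11, 1, 1, 110.8],
    [13, 0, 1, 130.0],
])

# Learn what "normal" looks like; contamination is an assumed tuning value.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# Score new observations: one ordinary hour, and one with a burst of failed
# logins from many source addresses plus a large outbound transfer.
new_events = np.array([
    [12, 1, 1, 118.0],
    [90, 45, 17, 4200.0],
])

# predict() returns 1 for inliers and -1 for anomalies.
for features, label in zip(new_events, model.predict(new_events)):
    status = "anomalous" if label == -1 else "normal"
    print(features, "->", status)
```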
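Similarly, here is a simplified sketch of the NLP-based phishing triage described under Detect and Respond: a small text classifier scores an incoming message, and anything above an assumed policy threshold is quarantined rather than delivered. The training examples and the threshold are hypothetical; a real deployment would train on large labeled corpora and tune the threshold against business risk.

```python
# Minimal sketch: score an email for phishing, quarantine above a threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, illustrative training set (0 = legitimate, 1 = phishing).
emails = [
    "Your invoice for last month is attached, let me know if anything looks off.",
    "Team lunch has moved to Thursday at noon.",
    "URGENT: your account is suspended, verify your password at this link now.",
    "You have won a prize! Confirm your bank details to claim it immediately.",
]
labels = [0, 0, 1, 1]

# TF-IDF text features feeding a logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

QUARANTINE_THRESHOLD = 0.8  # assumed policy setting, tuned in practice

incoming = "Security alert: verify your password immediately or lose access."
phishing_probability = classifier.predict_proba([incoming])[0][1]

# Automated response step: hold suspicious mail instead of delivering it.
if phishing_probability >= QUARANTINE_THRESHOLD:
    print(f"Quarantined (score {phishing_probability:.2f})")
else:
    print(f"Delivered (score {phishing_probability:.2f})")
```

In practice the quarantine branch would call into the mail platform’s own controls; the point of the sketch is simply that detection and automated response can share a single scoring step.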

What the future holds

Many routine cybersecurity tasks, such as investigating alerts, correlating data and prioritizing IT operations activities, are fairly repetitive. The more we can offload them to AI-based technology, the more we can free human cybersecurity experts to solve harder, more pressing problems, such as investigating and responding to sophisticated threats.

AI will continue to evolve and improve rapidly, and as it advances, so will the tactics employed by cybercriminals. That is why organizations need to monitor the AI landscape vigilantly, stay aware of new developments and threats posed by these technologies, and adapt their programs to stay protected.

The race between hackers and defenders will continue to intensify, and AI will play an increasingly crucial role on both sides of that equation.


