Safety first: Using GAI to mitigate security risks – Promoted Content


CIOs worldwide are grappling with the best ways to operationalise generative AI (GAI) in their organisations to improve customer experiences, increase operational resilience, and mitigate risk. Seemingly endless possibilities lie before you, and the difficulty lies in choosing the path forward.



With tools like ChatGPT and Bard available to the general public, the masses are experimenting with incorporating them into their job functions and everyday lives. That includes threat actors, who are now using GAI to probe organisations’ security weaknesses and develop more sophisticated cyberattacks.

The promise of GAI and cybersecurity

Thwarting these threats in real time is nearly impossible without automating your security workflows. Luckily, as an IT leader, you’ve long been working to harness the power of AI and machine learning [1] to protect your organisation. You likely already have an infrastructure that captures and matches patterns in your security operations data, so you’re well placed to capitalise on GAI and automate workflows that help you identify and respond to threats even faster.

When you safely integrate GAI tools into your security workflows, you arm your team with the information they need to find vulnerabilities and anomalies in real time, so they can prevent advanced attacks faster and with less effort. They can do all of this without spending valuable time on training or futilely trying to analyse the endless stream of data your organisation generates every second.

Say an employee falls victim to a phishing scam, which is highly likely given that 84% of security decision makers [2] saw an increase in such attacks in 2022. The breach leaves your systems vulnerable to attackers. Are they already in your system? Is your sensitive data adequately protected from this threat? What steps can your team take, quickly, to prevent reputational and financial damage and restore a secure environment? GAI can tell you, in real time.

When we’re talking about security breaches, speed and scale are critical. 

According to a recent study, organisations with a fully deployed AI and automation programme identified and contained security breaches 108 days faster [3] than those without one. These organisations also reported US$1.76 million lower data breach costs.

So, how can your organisation tap into GAI tools securely and effectively?

Beware of jumping into GAI without guardrails

It might be tempting for your team to use a tool like ChatGPT to quickly work out how to resolve an issue. However, the response you receive isn’t generated from your specific data. It comes from large language models (LLMs) trained on public information, some of it outdated, scraped from across the internet. This kind of shadow IT leaves your team more susceptible to hallucinations: incorrect information presented as if it were accurate.

Beyond hallucinations, GAI tools that run on enormous data sets can consume massive compute resources, making them extraordinarily expensive and carbon-intensive [4]. Training the model (and your staff) can take weeks or months. And there’s the risk of leaking sensitive data if you share it with an unsecured tool.

Where does that leave you?

Safely implementing GAI to streamline your security workflows

To take full advantage of using GAI to mitigate risk — while limiting costs and helping to ensure your data remains secure — you need to find the most relevant pieces of data and pass them to GAI tools securely.

If your team passes a limited set of contextual data (such as your proprietary telemetry) to the GAI tool, it can quickly analyse it and provide valuable, relevant insight into anomalies and potential problems, and how to resolve them. The more specific the data your team provides, the more relevant and valuable the answers, and the faster you can act on them.
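In practice, this approach boils down to retrieving only the most relevant slices of your data and attaching them to a prompt. The sketch below is purely illustrative, not an Elastic or OpenAI API: the function names and the simple keyword-overlap scoring are assumptions standing in for whatever retrieval mechanism your platform provides.

```python
# Hypothetical sketch: retrieval-augmented prompting for a security workflow.
# Rank telemetry lines by relevance to the analyst's question, then pass only
# the narrow context (not the whole data set) to a GAI tool. The scoring
# heuristic and all names here are illustrative assumptions.

def score(line: str, question: str) -> int:
    """Count how many question keywords appear in a telemetry line."""
    keywords = {w.strip("?.,!").lower() for w in question.split() if len(w) > 3}
    return sum(1 for w in keywords if w in line.lower())

def build_prompt(telemetry: list[str], question: str, top_k: int = 2) -> str:
    """Keep only the top_k most relevant lines as context for the model."""
    ranked = sorted(telemetry, key=lambda line: score(line, question), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {question}"

telemetry = [
    "10:02 login failure for admin from 203.0.113.7",
    "10:03 login failure for admin from 203.0.113.7",
    "10:05 backup job completed successfully",
    "10:06 outbound transfer of 2 GB to unknown host",
]
prompt = build_prompt(telemetry, "Why the repeated login failure for admin?")
# prompt now contains only the two login-failure lines plus the question,
# ready to send securely to the GAI tool of your choice.
```

Real deployments would replace the keyword heuristic with semantic or hybrid search over your security data, but the shape of the flow is the same: retrieve, trim, then prompt.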

To isolate this relevant data, you need a unified data platform that continuously, and in real time, transforms all of your data into outcomes and all of your questions into answers.

When you have this foundation in place, your team can tap into GAI, using apps and platform integrations that quickly surface the relevant information they need when they need it. 

Unlocking even more GAI possibilities

Search-powered technology, like the open and flexible Elasticsearch Relevance Engine (ESRE) and Elastic AI Assistant, makes it possible for anyone on your security team to build GAI apps for your workflows and find relevant, organisation-specific data instantly, at scale, then securely pass it, along with prompts, to a GAI tool, reducing (and often eliminating) incorrect answers, high costs, and security risks.

GAI-driven cyber threats are evolving every day. Your data platform needs to provide you with the openness and flexibility to stay ahead of the changes so you can leverage any and all available GAI technology to keep your customers and critical business data safe.

Discover how Elastic can help you harness the power of GAI to mitigate security risks. 

Elastic, Elasticsearch Relevance Engine, ESRE and associated marks are trademarks, logos or registered trademarks of Elasticsearch N.V. in the United States and other countries, used here with permission. 

1. “Key IT initiatives reshape the CIO agenda,” CIO.com, 2023.

2. “Three-quarters of businesses braced for ‘serious’ email attack this year,” CSO Online, 2023.

3. “Cost of a Data Breach Report 2023,” IBM, 2023.

4. “The Generative AI Race Has a Dirty Secret,” Wired, 2023.


