Shadow AI is the latest cybersecurity threat you need to prepare for


Shadow IT – the use of software, hardware, systems and services that haven’t been approved by an organization’s IT or IT security departments – has been a problem for the past two decades, and a difficult area for IT leaders to manage effectively.

Like shadow IT, shadow AI refers to all the AI-enabled products and platforms being used within your organization without those departments’ knowledge. While personal use of AI applications can seem harmless and low-risk, Samsung (for example) was hit with immediate repercussions when its employees’ ChatGPT use led to sensitive intellectual property being leaked online.

Broadly, the risk of shadow AI is threefold:

1) Inputting data or content into these applications can put intellectual property at risk

2) As the number of AI-enabled applications increases, so does the chance of misuse, with data governance and regulations such as GDPR being key considerations

3) Unchecked AI output carries reputational risk, and the considerable ramifications of regulatory breaches make it a significant headache for IT teams to track

Mitigating risks brought on by shadow AI

There are four steps that should be taken to mitigate the threat of shadow AI. All are interdependent, and the absence of any one of them will leave a gap in the mitigation:

1. Classify your AI usage

Establishing a risk matrix for AI use within your organization, and defining how AI will be used, will allow you to have productive conversations about AI usage across the entire business.

Risk can be considered on a continuum, from the low risk of using GenAI as a “virtual assistant”, through “co-pilot” applications, and into higher-risk areas such as embedding AI into your own products.

Categorizing applications against the business’s risk appetite will allow you to determine which AI-enabled applications can be approved for use at your organization. This will be of critical importance as you build out your acceptable use policy, training and detection processes.
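As a rough illustration, such a risk matrix can be expressed as a simple data structure. The sketch below is hypothetical: the tier names, use-case labels and approval rule are placeholders to be replaced by your organization’s own categories and risk appetite.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = 1     # e.g. GenAI used as a "virtual assistant"
    MEDIUM = 2  # e.g. "co-pilot" style applications
    HIGH = 3    # e.g. AI embedded into your own products

# Illustrative entries only; a real matrix would be far more granular.
AI_RISK_MATRIX = {
    "chat_assistant": RiskTier.LOW,
    "code_copilot": RiskTier.MEDIUM,
    "embedded_product_ai": RiskTier.HIGH,
}

def is_approved(use_case: str, risk_appetite: RiskTier) -> bool:
    """Approve a use case only if its tier does not exceed the
    organization's stated risk appetite; unknown uses default to HIGH."""
    tier = AI_RISK_MATRIX.get(use_case, RiskTier.HIGH)
    return tier.value <= risk_appetite.value
```

Here, is_approved("code_copilot", RiskTier.MEDIUM) returns True, while an unrecognized application defaults to the highest tier and is rejected – a default that keeps unclassified shadow AI out until it has been assessed.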

2. Build an acceptable use policy

Once your AI use has been classified, an acceptable use policy for your entire organization needs to be laid out to ensure all employees know exactly what they can and cannot do when interacting with the approved AI-enabled applications.

Making clear what constitutes acceptable use is key to keeping your data safe, and it will enable you to take enforcement action where necessary.
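Parts of such a policy can also be encoded as machine-checkable rules. The following is a minimal sketch assuming a simple allowlist of approved applications; the application names and the two sample rules are hypothetical placeholders, not a complete policy.

```python
# Hypothetical allowlist drawn from the risk classification in step 1.
APPROVED_AI_APPS = {"approved-chat-assistant", "approved-code-copilot"}

def check_usage(app_name: str, contains_sensitive_data: bool) -> str:
    """Evaluate a proposed AI interaction against two sample policy rules."""
    if app_name not in APPROVED_AI_APPS:
        return "blocked: application is not on the approved list"
    if contains_sensitive_data:
        return "blocked: sensitive data must not be sent to AI tools"
    return "allowed"
```

Policy-as-code like this does not replace the written policy, but it lets the same rules drive automated guardrails such as proxy filters or data loss prevention checks.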

3. Create employee training based on your AI usage and acceptable use policy, and ensure all employees complete the training

The arrival of generative AI is as fundamental a shift as the introduction of the internet into the workplace. Training needs to start from the ground up to ensure employees know what they are using and how to use it both effectively and safely.

Transformative technology always has a learning curve, and people cannot be left to their own devices when these skills are so important. Investing now in your employees’ ability to use generative AI safely will both boost your organization’s productivity and help mitigate the misuse of data.

4. Put the right discovery tools in place to monitor for active AI use within your organization

IT Asset Management (ITAM) vendors were working on AI discovery capabilities even before ChatGPT hit the headlines last year. Organizations can only manage what they can see, and that goes double for AI-enabled applications: many are free, so they cannot be tracked through traditional means like expense receipts or purchase orders.

This is especially important for tools that have AI embedded within them, where the user is not necessarily aware that AI is in use. Many employees do not understand the intellectual property implications in these circumstances, so active monitoring with an ITAM solution that offers software asset discovery for AI tools is critical.
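Where a full ITAM solution is not yet in place, discovery can start with the logs you already have. The sketch below assumes you can export proxy or DNS logs as a CSV with “user” and “domain” columns (a hypothetical format), and its domain list is deliberately small and illustrative; a real tool would maintain a curated catalogue of AI services.

```python
import csv

# Illustrative, incomplete list of well-known AI service domains.
KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def find_shadow_ai(log_path: str) -> dict[str, set[str]]:
    """Map each AI domain to the set of users seen contacting it."""
    hits: dict[str, set[str]] = {}
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in KNOWN_AI_DOMAINS:
                hits.setdefault(domain, set()).add(row["user"])
    return hits
```

Even a crude report like this surfaces who is using which AI services, giving you a starting point for the policy and training conversations in steps 2 and 3.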

A strong security posture requires the implementation of all four of these steps; without all four pieces, there is a hole in your shadow AI defense system.

Conclusion

No single industry is more susceptible to shadow AI risk than another, but larger organizations and well-known brands typically stand to suffer the most extensive reputational damage, and they should take a more cautious approach.

Industries and companies of all sizes must leverage the benefits of AI. However, having the right procedures and guidance in place as part of an integrated cybersecurity strategy is a crucial part of adopting this transformative technology.

AI has already made permanent changes to how organizations operate, and embracing this change will set companies up for future success.

Generative AI is yet another technology where preventing the threat at the perimeter can only be partially successful. We must detect what is being used in the shadows.


