Generative AI has emerged as a powerful tool, heralded for its potential but also scrutinized for its implications. Enterprises will invest nearly $16 billion worldwide in GenAI solutions in 2023, according to IDC.
In this Help Net Security interview, Guy Guzner, CEO at Savvy, discusses the challenges and opportunities presented by in-house AI models, the security landscape surrounding them, and the future of AI cybersecurity.
Generative AI is stirring up concerns even as IT leaders remain confident about its benefits. Given that organizations are expected to start building their own AI models in-house soon, if they haven't already, what are the immediate security considerations they should keep in mind?
Organizations developing in-house AI models have a distinct advantage when it comes to critical security concerns. Currently, the widespread adoption of generative AI and other SaaS applications has led to challenges in standardizing security protocols due to integration complexities, creating friction between IT and business units. Developing in-house AI models helps curb some of the SaaS sprawl driven by the popularity of these third-party solutions.
However, one crucial aspect organizations must consider is the security of the algorithms used in their AI models. Generative AI uses algorithms and training data to create new data. When planning and building an in-house AI solution, it’s important to break down both elements and evaluate the security risks inherent in each.
The training data ultimately influences the content created by the model. Biases and risks in the training data will manifest in the model’s outputs, with lasting consequences we don’t yet fully understand. Therefore, organizations must meticulously assess the training data to identify and mitigate potential biases and risks.
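Part of that assessment can be automated before data ever reaches a training pipeline. The minimal sketch below, assuming a simple regex-based scan and hypothetical pattern names, illustrates one way to flag records containing obvious sensitive data; a real review would also cover bias metrics, provenance, and licensing.

```python
import re
from collections import Counter

# Hypothetical patterns for sensitive data we would not want in a training corpus.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def audit_training_records(records):
    """Count how many records contain each category of sensitive data and flag them."""
    hits = Counter()
    flagged = []
    for index, text in enumerate(records):
        matched = [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]
        if matched:
            hits.update(matched)
            flagged.append((index, matched))
    return hits, flagged

if __name__ == "__main__":
    sample = [
        "Contact support at help@example.com for a refund.",
        "The quarterly report shows steady growth.",
    ]
    counts, flagged_rows = audit_training_records(sample)
    print(counts)        # Counter({'email': 1})
    print(flagged_rows)  # [(0, ['email'])]
```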
Intellectual property concerns and regulatory implications have been highlighted when considering the risks of AI. In your opinion, how can organizations balance leveraging AI’s power and mitigating these concerns?
First and foremost, you have to establish clear ethical guidelines. Transparency and open communication about AI applications foster trust among stakeholders, providing a foundation for responsible AI use. Companies should clearly label data sources and respect the privacy and acceptable usage policies associated with the data, ensuring ethical usage and mitigating intellectual property concerns.
Collaboration plays a crucial role, too. Organizations benefit from working closely with peers and regulatory bodies to establish best practices and standards. Sharing insights on AI risks and mitigation strategies strengthens the collective defense against threats, creating a more secure AI landscape.
In the meantime, companies are quick to block third-party genAI applications because they fear employees will enter sensitive data, but users are likely to simply input that information from a non-work device. The allure of applications like ChatGPT is too great to expect all employees to uphold restrictive guidelines, so companies need to encourage safe usage instead.
Organizations should leverage tools designed to enhance user awareness, alert users to potential risks and guide them toward secure practices rather than simply blocking access. For instance, if the tools detect sensitive data, users should be prompted to take necessary precautions, such as turning on private mode in ChatGPT or removing certain words and phrases from a prompt.
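To make the guide-rather-than-block pattern concrete, here is a minimal sketch assuming a regex-based check run before a prompt is submitted; the pattern names and the project codename are hypothetical placeholders, and a production tool would use far richer detection.

```python
import re

# Hypothetical patterns for data we would rather not see pasted into a third-party genAI app.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal_codename": re.compile(r"\bproject[- ]atlas\b", re.IGNORECASE),  # placeholder
}

def check_prompt(prompt: str):
    """Return warnings to show the user instead of silently blocking the prompt."""
    warnings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            warnings.append(
                f"Possible {label} detected. Consider removing it or enabling private mode before sending."
            )
    return warnings

if __name__ == "__main__":
    prompt = "Summarize this email from jane.doe@acme.com about Project Atlas."
    for warning in check_prompt(prompt):
        print(warning)
```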
Considering the evolving nature of threats, how crucial is it for organizations to be proactive rather than reactive in securing their AI models?
Enterprises must learn from the challenges posed by rapid genAI SaaS adoption, the lack of standardized apps, and the complexities introduced by app integrations. Unbridled SaaS sprawl has made it difficult for resource-strapped enterprises to enforce effective security controls at scale.
By embracing proactive security, organizations can anticipate potential risks, fortify their defenses and prevent breaches. It’s not just about patching vulnerabilities, but rather predicting potential vulnerabilities and implementing proactive measures to counteract them. By doing so, organizations will not only protect their AI models but also foster an environment of innovation and growth, secure in the knowledge that their digital assets are shielded against the challenges posed by SaaS sprawl and other cybersecurity threats.
Model theft, inference attacks, and data poisoning are some of the potential attacks against AI models highlighted by analysts. Which of the attacks listed are the most threatening or potentially damaging, and why?
Of the highlighted attacks, model theft and inference attacks are particularly menacing. Model theft allows malicious actors to steal proprietary models, essentially providing them with a shortcut to valuable AI solutions without the effort of development. This not only results in financial losses for organizations but also poses a significant competitive threat.
On the other hand, inference attacks exploit the responses of the AI model to deduce sensitive information from seemingly harmless queries. Such attacks compromise privacy and security, making them highly dangerous. An attacker’s ability to extract sensitive data from a model can enable various malicious activities, including identity theft and corporate espionage. Thus, these attacks are particularly concerning due to their potential to cause widespread damage and compromise both personal and organizational security.
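As a simplified illustration of this attack class, membership inference exploits the tendency of models to be more confident on examples they were trained on. The sketch below assumes access to per-example top-class confidences and uses a hypothetical fixed threshold; real attacks calibrate the decision with shadow models rather than a single cutoff.

```python
import numpy as np

def membership_inference_guess(confidences: np.ndarray, threshold: float = 0.95) -> np.ndarray:
    """
    Naive membership inference: guess that an example was part of the training set
    when the model's top-class confidence exceeds a threshold.
    Real attacks calibrate this with shadow models instead of a fixed cutoff.
    """
    return confidences >= threshold

if __name__ == "__main__":
    # Hypothetical top-class confidences returned by a target model.
    confidences = np.array([0.99, 0.62, 0.97, 0.55])
    print(membership_inference_guess(confidences))  # [ True False  True False]
```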
Analysts predict that a range of security products will emerge to safeguard AI models, from bot management to AI/ML security tools. What are your thoughts on this prediction, and are there other cybersecurity areas you foresee gaining prominence?
That is a plausible prediction. Looking ahead, I foresee the rise of explainable AI tools. As AI systems become more complex, understanding their decisions becomes paramount, especially in critical applications like healthcare and finance. Tools that enhance the interpretability of AI models ensure transparency, enabling organizations to identify potential biases or vulnerabilities that might impact the model’s integrity.
Additionally, with the increasing prevalence of collaborative AI development, federated learning security will become crucial. Ensuring the security of the distributed learning process will be necessary to prevent data leakage and model manipulation during collaborative training efforts.
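One simple flavor of that protection is screening client contributions before they are aggregated. The sketch below, assuming clients submit flattened parameter updates and using a hypothetical z-score cutoff, shows a basic robust-averaging defense against manipulated updates; production systems would combine this with secure aggregation and cryptographic protections against data leakage.

```python
import numpy as np

def robust_federated_average(client_updates, z_threshold: float = 2.5):
    """
    Aggregate client model updates while discarding outliers whose norm deviates
    strongly from the cohort, a simple defense against manipulated contributions.
    """
    updates = np.stack(client_updates)          # shape: (clients, parameters)
    norms = np.linalg.norm(updates, axis=1)
    z_scores = (norms - norms.mean()) / (norms.std() + 1e-8)
    keep = np.abs(z_scores) <= z_threshold      # drop suspicious contributions
    return updates[keep].mean(axis=0), np.where(~keep)[0]

if __name__ == "__main__":
    honest = [np.random.normal(0, 0.1, 10) for _ in range(9)]
    poisoned = [np.random.normal(0, 5.0, 10)]   # simulated manipulated update
    aggregate, rejected = robust_federated_average(honest + poisoned)
    print("Rejected client indices:", rejected)
```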