The CISO’s approach to AI: Balancing transformation with trust


As organizations increasingly adopt third-party AI tools to streamline operations and gain a competitive edge, they also invite a host of new risks. Many companies are unprepared, lacking the clear policies and employee training needed to mitigate them.

AI risks extend far beyond the usual suspects of IT and security departments, bringing new vulnerabilities to customer success, marketing, sales, and finance. These risks—from privacy breaches and biased algorithms to financial losses and regulatory issues—demand a new level of vigilance and preparation. New threats on the horizon also make it more important than ever to establish policies around AI sooner rather than later.

Due diligence for AI adoption

So, how should CISOs approach AI adoption? When weighing new AI tools, CISOs must examine a few key risk factors. These considerations apply to any tool that may leverage AI, in any business department, not just to security tools that use AI.

The first is data handling: how a tool collects, processes, stores, and encrypts data, and whether robust access controls are in place. Data privacy must also be paramount, with compliance measures in place for regulations like GDPR and CCPA, along with clear policies for anonymization and user consent. CISOs should also set guidelines for how new AI tools manage third-party data sharing, ensuring vendors meet the organization’s data protection standards.
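
Anonymization in practice often starts with stripping identifiers before any prompt or record leaves the organization. Below is a deliberately naive sketch of that step in Python; the patterns are illustrative only, and a production system would rely on a vetted PII-detection service rather than a handful of regexes.

```python
# Naive illustration of redacting PII before text is sent to an external AI
# API. Patterns are examples, not an exhaustive or production-grade set.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or 555-867-5309."))
# Contact Jane at [EMAIL] or [PHONE].
```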

Scrutinizing model security is equally key: CISOs need to look for protection against tampering and adversarial attacks on AI tools. Model transparency matters just as much; favor tools whose decisions can be explained and audited for fairness and bias. Error handling procedures, regulatory compliance, and legal liability should all be clearly defined, with a clear escalation path to the GRC team and/or legal counsel when issues arise. Finally, CISOs must assess an AI tool’s integration with existing systems, its performance and reliability, ethical implications, user impact, scalability, vendor support, and how changes will be communicated to stakeholders.
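
One way to keep these criteria enforceable is to track each review as a structured record rather than prose. The sketch below assumes a simple in-house schema; the field names, example values, and the approval rule are illustrative, not a standard.

```python
# Hypothetical due-diligence record for an AI tool review.
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    tool: str
    data_handling_reviewed: bool     # collection, processing, storage, encryption
    privacy_compliant: bool          # e.g., GDPR/CCPA measures confirmed
    model_security_reviewed: bool    # tampering/adversarial protections
    explainability_documented: bool  # decisions can be explained and audited
    escalation_path: str             # who is engaged when issues arise

    def approved(self) -> bool:
        # Every review gate must pass before the tool is cleared for use.
        return all([self.data_handling_reviewed, self.privacy_compliant,
                    self.model_security_reviewed, self.explainability_documented])

review = AIToolAssessment(
    tool="vendor-chat-assistant",   # hypothetical tool name
    data_handling_reviewed=True,
    privacy_compliant=True,
    model_security_reviewed=False,  # pending adversarial-robustness review
    explainability_documented=True,
    escalation_path="GRC team, then legal counsel",
)
print(review.approved())  # False until every gate passes
```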

It’s not just AI-focused tools that should be subject to these considerations. Other third-party tools may ship small AI integrations that are enabled by default, without CISO visibility. For example, a video conferencing platform may include an AI transcription feature that automatically transcribes internal and external calls. That feature touches company and customer data, so it should be reviewed and approved by the CISO and security team before employees can use it.
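
A lightweight way to surface such defaults is to compare a SaaS feature inventory against the list of AI features security has actually approved. The sketch below assumes a hypothetical inventory format; in practice the data would come from a CASB or SaaS-management export.

```python
# Hypothetical SaaS inventory; app and feature names are illustrative.
SAAS_INVENTORY = [
    {"app": "video-conf-platform", "feature": "ai_transcription", "enabled": True},
    {"app": "crm-suite", "feature": "ai_email_drafts", "enabled": False},
]

# (app, feature) pairs the security team has reviewed and approved.
APPROVED_AI_FEATURES = {("crm-suite", "ai_email_drafts")}

for item in SAAS_INVENTORY:
    key = (item["app"], item["feature"])
    if item["enabled"] and key not in APPROVED_AI_FEATURES:
        print(f"REVIEW NEEDED: {item['app']} has unapproved AI feature "
              f"'{item['feature']}' enabled")
```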

Guardrails for responsible AI use

Beyond establishing guardrails for assessing AI tools, it’s also imperative that companies develop acceptable use policies for AI, so that every employee knows how to use the tools appropriately and how to mitigate the associated risks. Every policy should cover a few essential topics:

  • Purpose and scope – Clearly define the objectives and boundaries of AI usage within your company, specifying which tools are authorized and for what purposes.
  • Permitted and prohibited uses – Outline acceptable and unacceptable applications of AI tools, providing specific examples to guide employee behavior (a minimal sketch of such a check follows this list).
  • Data security and privacy guidelines – Establish strict protocols for handling sensitive data, including encryption, access controls, and adherence to relevant regulations. Accuracy checks on generated output are also essential, since generative AI tools can hallucinate.
  • Integration and operational integrity – Define guidelines for the proper integration and use of AI within existing systems and processes, ensuring smooth operation and minimizing disruptions.
  • Risk management and enforcement – Outline procedures for identifying, assessing, and mitigating AI-related risks, along with repercussions for policy violations.
  • Transparency and accountability – Establish mechanisms to document and justify AI-driven decisions, promoting transparency and building stakeholder trust.
  • Best practices and training – Provide comprehensive guidance on responsible AI use, including regular employee training covering all acceptable use policy aspects with company-specific examples.
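
To make the permitted-and-prohibited-uses topic concrete, here is a minimal policy-as-code sketch. The tool names, data categories, and rules are hypothetical; a real policy engine would be far richer, but the shape of the check is the same.

```python
# Illustrative acceptable-use check; all names and categories are assumptions.
from dataclasses import dataclass, field

APPROVED_TOOLS = {"vendor-chat-assistant", "internal-summarizer"}   # hypothetical
PROHIBITED_DATA = {"customer_pii", "source_code", "financials"}     # hypothetical

@dataclass
class AIUseRequest:
    tool: str
    purpose: str
    data_categories: set[str] = field(default_factory=set)

def evaluate(request: AIUseRequest) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed AI use."""
    if request.tool not in APPROVED_TOOLS:
        return False, f"{request.tool} is not on the approved-tool list"
    blocked = request.data_categories & PROHIBITED_DATA
    if blocked:
        return False, f"prohibited data categories: {', '.join(sorted(blocked))}"
    return True, "permitted"

allowed, reason = evaluate(
    AIUseRequest(tool="vendor-chat-assistant",
                 purpose="draft a marketing email",
                 data_categories={"customer_pii"})
)
print(allowed, reason)  # False prohibited data categories: customer_pii
```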

Employee training is the most critical component of establishing AI guidelines and policies. Without proper training, it’s difficult to ensure employees understand AI risks and how to mitigate them. For many companies, home-grown training programs may work best, because they can incorporate company-specific use cases and risk examples. The less ambiguity there is for employees, the better.

It’s also important to communicate AI usage to your customers. If any AI tools ingest customer data, customers should be notified about what data is being used, what it’s being used for, and where the outputs are going. Customers should also be able to opt out of having their data used with AI tools.
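
Honoring opt-outs means checking consent before any customer record reaches an AI pipeline. Below is a minimal sketch, assuming a simple consent store keyed by customer ID; note that it defaults to not processing when no record exists.

```python
# Hypothetical consent store; the record shape is an assumption for illustration.
CONSENT_STORE = {
    "cust-001": {"ai_processing": True},
    "cust-002": {"ai_processing": False},  # customer opted out
}

def may_use_with_ai(customer_id: str) -> bool:
    """Default to *not* processing when no explicit consent record exists."""
    record = CONSENT_STORE.get(customer_id)
    return bool(record and record.get("ai_processing"))

for cid in ("cust-001", "cust-002", "cust-003"):
    action = "route to AI pipeline" if may_use_with_ai(cid) else "skip AI processing"
    print(f"{cid}: {action}")
```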

Conclusion

AI’s potential for transformation is enormous, as is its potential for introducing new risks. By establishing robust policies and guidelines around usage, practicing strong data management, conducting thorough risk assessments, and fostering a culture of security awareness, CISOs can enable their organizations to harness AI while minimizing the risk of breaches and other issues.


